How do I slow down API calls to the Binance API (Python)?

What must I add to my code so I stop running into API rate limit errors? I believe I run into this error because my script is making too many API calls to the Binance servers.
My code is:
from binance.client import Client
client = Client(api_key=***, api_secret=***, tld='us')
The client module uses the requests library. The Client constructor has an optional parameter, requests_params=None, which lets you pass a "Dictionary of requests params to use for all calls" (quoted from the documentation).
I have looked through the requests documentation but could not find anything to fix this issue. I found another library called ratelimit but I do not know how to pass it through client() effectively.
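For example, I imagine wrapping my calls in something like this (just a sketch; the limit values and the client method are placeholders, and I have not confirmed this is the right approach):
from ratelimit import limits, sleep_and_retry
from binance.client import Client

client = Client(api_key='YOUR_KEY', api_secret='YOUR_SECRET', tld='us')

# Placeholder limit: at most 10 calls per 60 seconds, sleeping when exceeded.
@sleep_and_retry
@limits(calls=10, period=60)
def get_ticker(symbol):
    return client.get_symbol_ticker(symbol=symbol)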
The error message I receive is:
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.binance.us', port=443): Max retries exceeded with url: /api/v1/ping (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))

You can simply add a delay using time.sleep in between your requests.
from time import sleep
# Adds a delay of 3 seconds
sleep(3)
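In context, a polling loop might look roughly like this (the client method and the 3-second interval are placeholders, not part of the original answer):
from time import sleep
from binance.client import Client

client = Client(api_key='YOUR_KEY', api_secret='YOUR_SECRET', tld='us')

while True:
    # Placeholder call; any client method works the same way.
    ticker = client.get_symbol_ticker(symbol='BTCUSD')
    print(ticker)
    sleep(3)  # wait 3 seconds before the next request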

Have you tried a decorator? In my opinion it's a very clean and elegant way to solve your problem :-)
Here's an example:
import requests
from functools import wraps
import time

def delay(sleep_time: int):
    def decorator(function):
        @wraps(function)
        def wrapper(*args, **kwargs):
            time.sleep(sleep_time)
            print(f"Sleeping {sleep_time} seconds")
            return function(*args, **kwargs)
        return wrapper
    return decorator

@delay(5)
def get_data(url: str) -> requests.models.Response:
    return requests.get(url)

while True:
    print(get_data("https://www.google.com"))

Related

How to make async requests in Sanic work in parallel

I'm not able to free the main function in this code so that the tasks complete in parallel and I can receive another GET.
In this code, when I open http://0.0.0.0:8082/envioavisos?test1=AAAAAA&test2=test in Chrome, the get_avisos_grupo() function is executed in sequence rather than in parallel, and until the function ends I am not able to send another request to http://0.0.0.0:8082/envioavisos?test1=AAAAAA&test2=test
#!/usr/bin/env python3
import asyncio
import time
from sanic import Sanic
from sanic.response import text
from datetime import datetime
import requests

avisos_ips = ['1.1.1.1', '2.2.2.2']

app = Sanic(name='server')

async def get_avisos_grupo(ip_destino, test1, test2):
    try:
        try:
            print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), 'STEP 2', ip_destino)
            r = requests.post('http://{}:8081/avisosgrupo?test1={}&test2={}'.format(ip_destino, test1, test2), timeout=10)
            await asyncio.sleep(5)
        except Exception as e:
            print('TIME OUT', str(e))
            pass
    except Exception as e:
        print(str(e))
        pass

@app.route("/envioavisos", methods=['GET', 'POST'])
async def avisos_telegram_send(request):  # enviar avisos (send notifications)
    try:
        query_components = request.get_args(keep_blank_values=True)
        print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), '>--------STEP 1', query_components['test1'][0])
        for ip_destino in avisos_ips:
            asyncio.ensure_future(get_avisos_grupo(ip_destino, query_components['test1'][0], query_components['test2'][0]))
    except Exception as e:
        print(str(e))
        pass
    print(datetime.now().strftime("%d/%m/%Y %H:%M:%S,%f"), 'STEP 4')
    return text('ok')

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8082, workers=4)
The expected result is that everything is posted in parallel.
Instead I'm getting this result:
06/04/2021 16:25:18,669074 STEP 2 1.1.1.1
TIME OUT HTTPConnectionPool(host='1.1.1.1', port=8081): Max retries exceeded with url: '))
06/04/2021 16:25:28,684200 STEP 2 2.2.2.2
TIME OUT HTTPConnectionPool(host='2.2.2.2', port=8081): Max retries exceeded with url: '))
I expect to have something like this:
06/04/2021 16:25:18,669074 STEP 2 1.1.1.1
06/04/2021 16:25:28,684200 STEP 2 2.2.2.2
TIME OUT HTTPConnectionPool(host='1.1.1.1', port=8081): Max retries exceeded with url: '))
TIME OUT HTTPConnectionPool(host='2.2.2.2', port=8081): Max retries exceeded with url: '))
Asyncio is not a magic bullet that parallelizes operations. Indeed Sanic doesn't either. What it does is make efficient use of the processor to allow for multiple functions to "push the ball forward" a little at a time.
Everything runs in a single thread and a single process.
You are experiencing this because you are using a blocking HTTP call. You should replace requests with an async compatible utility so that Sanic can put the request aside to handle new requests while the outgoing operation takes place.
Take a look at this:
https://sanicframework.org/en/guide/basics/handlers.html#a-word-about-async
A common mistake!
Don't do this! You need to ping a website. What do you use? pip install your-fav-request-library 🙈
Instead, try using a client that is async/await capable. Your server will thank you. Avoid using blocking tools, and favor those that play well in the asynchronous ecosystem. If you need recommendations, check out Awesome Sanic
Sanic uses httpx inside of its testing package (sanic-testing) 😉.
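For example, the handler from the question could swap the blocking requests.post for httpx roughly like this (a sketch, untested; names and ports copied from the question):
import asyncio
import httpx

async def get_avisos_grupo(ip_destino, test1, test2):
    url = 'http://{}:8081/avisosgrupo?test1={}&test2={}'.format(ip_destino, test1, test2)
    try:
        async with httpx.AsyncClient() as client:
            # The event loop is free to serve other requests while this awaits.
            r = await client.post(url, timeout=10)
        await asyncio.sleep(5)
    except Exception as e:
        print('TIME OUT', str(e))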

Python Tornado async client

I created a batch delayed HTTP (async) client which allows triggering multiple async HTTP requests and, most importantly, allows delaying the start of the requests so that, for example, 100 requests are not triggered at the same time.
But it has an issue. The HTTP .fetch() method has a handleMethod parameter which handles the response, but I found out that if the delay (sleep) after the fetch isn't long enough, the handle method is not even triggered. (Maybe the request is killed in the meantime.)
It is probably related to the .run_sync() method. How do I fix that? I want to keep the delays but don't want this issue to happen.
I need to parse the response regardless of how long the request takes and regardless of the following sleep call (that call has another purpose, as I said, and should not be related to response handling at all).
class BatchDelayedHttpClient:
    def __init__(self, requestList):
        # class members
        self.httpClient = httpclient.AsyncHTTPClient()
        self.requestList = requestList
        ioloop.IOLoop.current().run_sync(self.execute)

    @gen.coroutine
    def execute(self):
        print("exec start")
        for request in self.requestList:
            print("requesting " + request["url"])
            self.httpClient.fetch(request["url"], request["handleMethod"], method=request["method"], headers=request["headers"], body=request["body"])
            yield gen.sleep(request["sleep"])
        print("exec end")

Gevent async server with blocking requests

I have what I would think is a pretty common use case for Gevent. I need a UDP server that listens for requests, and based on the request submits a POST to an external web service. The external web service essentially only allows one request at a time.
I would like to have an asynchronous UDP server so that data can be immediately retrieved and stored so that I don't miss any requests (this part is easy with the DatagramServer gevent provides). Then I need some way to send requests to the external web service serially, but in such a way that it doesn't ruin the async of the UDP server.
I first tried monkey patching everything and what I ended up with was a quick solution, but one in which my requests to the external web service were not rate limited in any way and which resulted in errors.
It seems like what I need is a single non-blocking worker to send requests to the external web service in serial while the UDP server adds tasks to the queue from which the non-blocking worker is working.
What I need is information on running a gevent server with additional greenlets for other tasks (especially with a queue). I've been using the serve_forever function of the DatagramServer and think that I'll need to use the start method instead, but haven't found much information on how it would fit together.
Thanks,
EDIT
The answer worked very well. I've adapted the UDP server example code with the answer from @mguijarr to produce a working example for my use case:
from __future__ import print_function
from gevent.server import DatagramServer
import gevent.queue
import gevent.monkey
import urllib
gevent.monkey.patch_all()

n = 0

def process_request(q):
    while True:
        request = q.get()
        print(request)
        print(urllib.urlopen('https://test.com').read())

class EchoServer(DatagramServer):
    __q = gevent.queue.Queue()
    __request_processing_greenlet = gevent.spawn(process_request, __q)

    def handle(self, data, address):
        print('%s: got %r' % (address[0], data))
        global n
        n += 1
        print(n)
        self.__q.put(n)
        self.socket.sendto('Received %s bytes' % len(data), address)

if __name__ == '__main__':
    print('Receiving datagrams on :9000')
    EchoServer(':9000').serve_forever()
Here is how I would do it:
Write a function taking a "queue" object as argument; this function will continuously process items from the queue. Each item is supposed to be a request for the web service.
This function could be a module-level function, not part of your DatagramServer instance:
def process_requests(q):
    while True:
        request = q.get()
        # do your magic with 'request'
        ...
In your DatagramServer, make the function run within a greenlet (like a background task):
self.__q = gevent.queue.Queue()
self.__request_processing_greenlet = gevent.spawn(process_requests, self.__q)
When you receive the UDP request in your DatagramServer instance, push the request to the queue:
self.__q.put(request)
This should do what you want. You still call 'serve_forever' on DatagramServer, no problem.

Python & URLLIB2 - Request webpage but don't wait for response

In Python, how would I go about making an HTTP request without waiting for the response? I don't care about getting any data back; I just need the server to register a page request.
Right now I use this code:
urllib2.urlopen("COOL WEBSITE")
But obviously this pauses the script until a response is returned; I just want to fire off a request and move on.
How would I do this?
What you want here is either threading or asynchronous requests.
Threading:
Wrap the call to urllib2.urlopen() in a threading.Thread()
Example:
import urllib2
from threading import Thread

def open_website(url):
    return urllib2.urlopen(url)

Thread(target=open_website, args=["http://google.com"]).start()
Asynchronous:
Unfortunately there is no standard way of doing this in the Python standard library.
Use the requests library which has this support.
Example:
from requests import async
async.get("http://google.com")
There is also a third option using the restclient library, which has had built-in asynchronous support for some time:
from restclient import GET
res = GET("http://google.com", async=True, resp=True)
Use thread:
import urllib
import threading

threading.Thread(target=urllib.urlopen, args=('COOL WEBSITE',)).start()
Don't forget that the args argument should be a tuple; that's why there's a trailing comma.
You can do this with the requests library as follows:
import requests

try:
    requests.get("http://127.0.0.1:8000/test/", timeout=10)
except requests.exceptions.ReadTimeout:  # this confirms that the request has reached the server
    pass  # do_something
except:
    print("unable to reach server")
    raise
With the above code you can send requests without waiting for the response. Specify the timeout according to your needs; if you don't set it, the request will not time out.
gevent may be a good choice.
First, patch the socket module:
import gevent
from gevent import monkey

monkey.patch_socket()
monkey.patch_ssl()
Then use gevent.spawn() to encapsulate your requests as greenlets. They will not block the main thread and are very fast!
Here's a simple tutorial.
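A minimal sketch of that idea (using requests for the HTTP call; not from the original answer):
from gevent import monkey
monkey.patch_socket()
monkey.patch_ssl()

import gevent
import requests  # imported after patching so its sockets are cooperative

# Fire the request off in a greenlet; the main flow continues immediately.
job = gevent.spawn(requests.get, 'http://example.com')

# ... do other work here ...

gevent.joinall([job], timeout=10)  # optionally wait before the script exits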

Why doesn't requests.get() return? What is the default timeout that requests.get() uses?

In my script, requests.get never returns:
import requests
print("requesting..")
# This call never returns!
r = requests.get(
    "http://www.some-site.example",
    proxies={'http': '222.255.169.74:8080'},
)
print(r.ok)
What could be the possible reason(s)? Any remedy? What is the default timeout that get uses?
What is the default timeout that get uses?
The default timeout is None, which means it'll wait (hang) until the connection is closed.
Just specify a timeout value, like this:
r = requests.get(
    'http://www.example.com',
    proxies={'http': '222.255.169.74:8080'},
    timeout=5
)
From requests documentation:
You can tell Requests to stop waiting for a response after a given
number of seconds with the timeout parameter:
>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)
Note:
timeout is not a time limit on the entire response download; rather,
an exception is raised if the server has not issued a response for
timeout seconds (more precisely, if no bytes have been received on the
underlying socket for timeout seconds).
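To illustrate that distinction (a sketch, not from the documentation; the URL is a placeholder): with stream=True the timeout only bounds the gap between received chunks, so a slow but steady download can run far longer than timeout seconds.
import requests

# The 5-second timeout applies to connecting and to each read from the socket,
# not to the total download time.
with requests.get('http://www.example.com/big-file', timeout=5, stream=True) as r:
    for chunk in r.iter_content(chunk_size=8192):
        pass  # each chunk only has to arrive within 5 seconds of the previous one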
It happens a lot to me that requests.get() takes a very long time to return even if the timeout is 1 second. There are a few ways to overcome this problem:
1. Use the TimeoutSauce internal class
From: https://github.com/kennethreitz/requests/issues/1928#issuecomment-35811896
import requests
from requests.adapters import TimeoutSauce

class MyTimeout(TimeoutSauce):
    def __init__(self, *args, **kwargs):
        if kwargs['connect'] is None:
            kwargs['connect'] = 5
        if kwargs['read'] is None:
            kwargs['read'] = 5
        super(MyTimeout, self).__init__(*args, **kwargs)

requests.adapters.TimeoutSauce = MyTimeout
This code should cause us to set the read timeout as equal to the
connect timeout, which is the timeout value you pass on your
Session.get() call. (Note that I haven't actually tested this code, so
it may need some quick debugging, I just wrote it straight into the
GitHub window.)
2. Use a fork of requests from kevinburke: https://github.com/kevinburke/requests/tree/connect-timeout
From its documentation: https://github.com/kevinburke/requests/blob/connect-timeout/docs/user/advanced.rst
If you specify a single value for the timeout, like this:
r = requests.get('https://github.com', timeout=5)
The timeout value will be applied to both the connect and the read
timeouts. Specify a tuple if you would like to set the values
separately:
r = requests.get('https://github.com', timeout=(3.05, 27))
NOTE: The change has since been merged to the main Requests project.
3. Using eventlet or signal, as already mentioned in the similar question:
Timeout for python requests.get entire response
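As a rough illustration of the signal-based approach (Unix only, main thread only; a sketch, not taken from the linked question):
import signal
import requests

class RequestTimeout(Exception):
    pass

def _handle_alarm(signum, frame):
    raise RequestTimeout('request took too long')

signal.signal(signal.SIGALRM, _handle_alarm)
signal.alarm(10)              # hard cap of 10 seconds on the whole call
try:
    r = requests.get('http://github.com')
finally:
    signal.alarm(0)           # always cancel the pending alarm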
I wanted a default timeout easily added to a bunch of code (assuming that timeout solves your problem)
This is the solution I picked up from a ticket submitted to the repository for Requests.
credit: https://github.com/kennethreitz/requests/issues/2011#issuecomment-477784399
The solution is the last couple of lines here, but I show more code for better context. I like to use a session for retry behaviour.
import requests
import functools
from requests.adapters import HTTPAdapter, Retry

def requests_retry_session(
    retries=10,
    backoff_factor=2,
    status_forcelist=(500, 502, 503, 504),
    session=None,
) -> requests.Session:
    session = session or requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    # set default timeout
    for method in ('get', 'options', 'head', 'post', 'put', 'patch', 'delete'):
        setattr(session, method, functools.partial(getattr(session, method), timeout=30))
    return session
Then you can do something like this:
requests_session = requests_retry_session()
r = requests_session.get(url=url,...
In my case, the reason "requests.get never returns" is that requests.get() attempts to connect to the host's resolved IPv6 address first. If something goes wrong connecting to that IPv6 address and the connection gets stuck, it only retries the IPv4 address if I explicitly set timeout=<N seconds> and the timeout is hit.
My solution is monkey-patching the Python socket module to ignore IPv6 (or IPv4, if IPv4 is the one not working); either this answer or this answer works for me.
You might wonder why the curl command works: curl connects over IPv4 without waiting for IPv6 to complete. You can trace the socket syscalls with the strace -ff -e network -s 10000 -- curl -vLk '<your url>' command. For Python, the strace -ff -e network -s 10000 -- python3 <your python script> command can be used.
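A sketch of one such socket monkey-patch (the linked answers are not reproduced here; this simply forces IPv4-only name resolution and must run before requests opens any connection):
import socket

_orig_getaddrinfo = socket.getaddrinfo

def _ipv4_only_getaddrinfo(host, port, family=0, type=0, proto=0, flags=0):
    # Ignore the requested family and resolve IPv4 addresses only.
    return _orig_getaddrinfo(host, port, socket.AF_INET, type, proto, flags)

socket.getaddrinfo = _ipv4_only_getaddrinfo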
Patching the documented "send" function will fix this for all requests, even in many dependent libraries and SDKs. When patching libraries, be sure to patch supported/documented functions, not TimeoutSauce; otherwise you may wind up silently losing the effect of your patch.
import requests

DEFAULT_TIMEOUT = 180

old_send = requests.Session.send

def new_send(*args, **kwargs):
    if kwargs.get("timeout", None) is None:
        kwargs["timeout"] = DEFAULT_TIMEOUT
    return old_send(*args, **kwargs)

requests.Session.send = new_send
The effects of not having any timeout are quite severe, and the use of a default timeout can almost never break anything - because TCP itself has default timeouts as well.
I reviewed all the answers and came to the conclusion that the problem still exists. On some sites, requests may hang indefinitely, and using multiprocessing seems to be overkill. Here's my approach (Python 3.5+):
import asyncio
import aiohttp

async def get_http(url):
    async with aiohttp.ClientSession(conn_timeout=1, read_timeout=3) as client:
        try:
            async with client.get(url) as response:
                content = await response.text()
                return content, response.status
        except Exception:
            pass

loop = asyncio.get_event_loop()
task = loop.create_task(get_http('http://example.com'))
loop.run_until_complete(task)
result = task.result()
if result is not None:
    content, status = task.result()
    if status == 200:
        print(content)
UPDATE
If you receive a deprecation warning about using conn_timeout and read_timeout, check near the bottom of THIS reference for how to use the ClientTimeout data structure. One simple way to apply this data structure per the linked reference to the original code above would be:
async def get_http(url):
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as client:
        try:
            etc.
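Putting the original coroutine and the ClientTimeout snippet together, the updated function would look roughly like this (a sketch based only on the two snippets above):
import asyncio
import aiohttp

async def get_http(url):
    # ClientTimeout replaces the deprecated conn_timeout/read_timeout arguments.
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as client:
        try:
            async with client.get(url) as response:
                content = await response.text()
                return content, response.status
        except Exception:
            pass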
