how to manage multiple requests in microservices? - python

My goal is to build synchronous management of multiple requests.
I need to pass the requests on to microservices that have a limited capacity.
When the number of incoming requests exceeds the limit of the services, I need to manage the waiting requests (maybe with a queue), but I don't know where the requests should wait, and where the answers from the service should return to.
I need to stay synchronous, and to build this in Python.
I am trying to use nameko as the framework.
Thanks for your help.
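A minimal sketch of one way to do this with the standard library (not nameko-specific; SERVICE_LIMIT, call_service and handle_request are illustrative names): a thread pool sized to the service's limit queues excess requests internally, and each caller blocks synchronously on its own result, so the answer returns to whichever caller is waiting for it.

from concurrent.futures import ThreadPoolExecutor

SERVICE_LIMIT = 5  # assumed concurrency limit of the downstream service
pool = ThreadPoolExecutor(max_workers=SERVICE_LIMIT)

def call_service(payload):
    # Placeholder for the real RPC/HTTP call to the microservice.
    return {"echo": payload}

def handle_request(payload):
    # At most SERVICE_LIMIT calls run at once; excess requests wait in
    # the executor's internal queue. The answer comes back to this
    # caller via future.result(), so the flow stays synchronous.
    future = pool.submit(call_service, payload)
    return future.result()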

Related

Performance of Singleton Controllers in FastAPI

I am having trouble troubleshooting a performance bottleneck in one of my APIs and I have a theory that I need somebody with deeper knowledge of Python to validate for me.
I have a FastAPI web service and a Node.js web service deployed on AWS. The Node.js API is performing perfectly under heavier loads, with multiple concurrent requests taking the same amount of time to be served.
My FastAPI service, however, is performing absurdly badly. If I make two requests concurrently, only one is served while the other has to wait for the first to finish, so the response time for the second request is twice as long as the first one.
My theory is that because I use the Singleton pattern to instantiate the controller after a request comes in to a route, the object already being in use and locked causes the second request to wait until the first is resolved. Could this be it, or am I missing something very obvious here? Two concurrent requests should absolutely not be a problem for any type of web server.
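One common cause worth ruling out first (a hedged guess, not necessarily the Singleton theory): in FastAPI, blocking work inside an async def route runs on the event loop and serializes requests, while a plain def route is dispatched to a threadpool. A minimal reproduction, with time.sleep standing in for the real controller work:

import time
from fastapi import FastAPI

app = FastAPI()

@app.get("/blocking")
async def blocking():
    time.sleep(3)  # blocks the event loop; a second request must wait
    return {"ok": True}

@app.get("/threaded")
def threaded():
    time.sleep(3)  # runs in FastAPI's threadpool; requests overlap
    return {"ok": True}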

Is aiohttp truly asynchronous?

I can't find a straight answer on this. Is aiohttp asynchronous in the way that Javascript is? I'm building a project where I'll need to send a large number of requests to an endpoint. If I use requests, I'll need to wait for the response before I can send the next request. I researched a few async requests libraries in Python, but those all seemed to start new threads to send requests. If I understand asynchronous correctly, starting a new thread pretty much defeats the purpose of asynchronous code (tell me if I'm wrong here). I'm really looking for a single-threaded asynchronous requests library in Python that will send requests much like Javascript would (that is, will send another request while waiting for a response from the first and not start multiple threads). Is aiohttp what I'm looking for?
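For what it's worth, a minimal sketch of the single-threaded pattern in question (httpbin.org here is just a stand-in endpoint): aiohttp with asyncio issues all the requests concurrently on one thread, resuming each coroutine as its response arrives, much like JavaScript's event loop.

import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return resp.status

async def main():
    urls = ["http://httpbin.org/get"] * 10  # illustrative targets
    async with aiohttp.ClientSession() as session:
        # All requests are in flight at once; no extra threads are used.
        return await asyncio.gather(*(fetch(session, u) for u in urls))

print(asyncio.run(main()))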

What is a good way to avoid making multiple requests to an API when caching the result, if the requests are close in time?

I have a Python webapp, and it's an intermediary service that requests data from another API and caches the data.
The thing is, each request to the external API requires some paperwork, so we had better cache the results on our side.
Suppose I have two requests to my service at times T and T + 1, and the API takes 3 seconds to respond: both requests will see that I don't have the result stored, and each will then make a request to the external API.
What are good mechanisms in Python for some kind of semaphore, so that the second request waits until the first request finishes and then reads from the cache?
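A minimal single-flight sketch using only the standard library (get_cached and fetch are illustrative names): the first caller for a key performs the fetch while holding a per-key lock, and a caller arriving a moment later blocks on that lock, then reads the freshly filled cache instead of hitting the external API again.

import threading

_cache = {}
_locks = {}
_locks_guard = threading.Lock()

def get_cached(key, fetch):
    if key in _cache:
        return _cache[key]
    with _locks_guard:
        lock = _locks.setdefault(key, threading.Lock())
    with lock:
        if key not in _cache:          # re-check after acquiring the lock
            _cache[key] = fetch(key)   # only the first caller hits the API
    return _cache[key]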

Ideal method for sending multiple HTTP requests over Python? [duplicate]

Possible Duplicate:
Multiple (asynchronous) connections with urllib2 or other http library?
I am working on a Linux web server that runs Python code to grab realtime data over HTTP from a 3rd party API. The data is put into a MySQL database.
I need to make a lot of queries to a lot of URLs, and I need to do it fast (faster = better). Currently I'm using urllib3 as my HTTP library.
What is the best way to go about this? Should I spawn multiple threads (if so, how many?) and have each query for a different URL?
I would love to hear your thoughts about this - thanks!
If a lot really is a lot, then you probably want to use asynchronous I/O, not threads.
requests + gevent = grequests
GRequests allows you to use Requests with Gevent to make asynchronous HTTP Requests easily.
import grequests
urls = [
'http://www.heroku.com',
'http://tablib.org',
'http://httpbin.org',
'http://python-requests.org',
'http://kennethreitz.com'
]
rs = (grequests.get(u) for u in urls)  # build unsent request objects
responses = grequests.map(rs)          # send them all concurrently via gevent
You should use multithreading as well as pipelining your requests, for example search -> details -> save (see the sketch below).
The number of threads you can use doesn't depend only on your equipment. How many requests can the service serve? How many concurrent requests does it allow? Even your bandwidth can be a bottleneck.
If you're doing a kind of scraping, the service could block you after a certain number of requests, so you may need to use proxies or multiple IP bindings.
As for me, in most cases I can run 50-300 concurrent requests on my laptop from Python scripts.
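A rough sketch of that threaded pipeline with stdlib queues (fetch_details and save_record are hypothetical stand-ins for the real HTTP call and the MySQL insert):

import queue
import threading

NUM_WORKERS = 8  # tune against the service's limits and your bandwidth

def fetch_details(item):
    return {"item": item}   # placeholder for the real HTTP request

def save_record(record):
    print("saved", record)  # placeholder for the MySQL insert

work_q = queue.Queue()

def worker():
    while True:
        item = work_q.get()
        if item is None:        # poison pill: stop this worker
            break
        save_record(fetch_details(item))

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for item in range(100):         # results of the "search" stage
    work_q.put(item)
for _ in threads:
    work_q.put(None)
for t in threads:
    t.join()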
Sounds like an excellent application for Twisted. Here are some web-related examples, including how to download a web page. Here is a related question on database connections with Twisted.
Note that Twisted does not rely on threads for doing multiple things at once. Rather, it takes a cooperative multitasking approach: your main script starts the reactor, and the reactor calls functions that you set up. Your functions must return control to the reactor before the reactor can continue working.
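A hedged sketch of that cooperative style using Twisted's Agent client (the URL is illustrative): no threads are involved; the reactor drives the request and invokes the callbacks as results arrive.

from twisted.internet import reactor
from twisted.web.client import Agent, readBody

agent = Agent(reactor)

def on_body(body):
    print(body[:80])
    reactor.stop()

d = agent.request(b"GET", b"http://httpbin.org/get")
d.addCallback(readBody)   # response -> deferred body
d.addCallback(on_body)
d.addErrback(lambda failure: (print(failure), reactor.stop()))
reactor.run()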

of tornado and blocking code

I am trying to move away from CherryPy for a web service that I am working on, and one alternative I am considering is Tornado. Now, on the backend, most of my requests look something like:
get POST data
see if I have it in cache (database access)
if not, make multiple HTTP requests to some other web service, which can take a good few seconds depending on the number of requests
I keep hearing that one should not block the Tornado main loop; I am wondering, if all of the above code is executed in the post() method of a RequestHandler, does this mean that I am blocking the loop? And if so, what is the appropriate approach to using Tornado with the above requirements?
Tornado ships with an asynchronous HTTP client (AsyncHTTPClient; actually two implementations, IIRC). Use that one if you need to make additional HTTP requests.
The database lookup should also be done with an asynchronous client in order not to block the Tornado ioloop/mainloop. I know there are a couple of database clients tailor-made for Tornado (e.g. for Redis and MongoDB) out there, and a MySQL lib is included in the Tornado distribution.
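A short sketch of a non-blocking handler in the modern coroutine style (the URL is illustrative, and the cache/database step is omitted): awaiting AsyncHTTPClient.fetch lets the IOLoop keep serving other requests while this one waits.

import tornado.web
from tornado.httpclient import AsyncHTTPClient

class ProxyHandler(tornado.web.RequestHandler):
    async def post(self):
        client = AsyncHTTPClient()
        # The await yields to the IOLoop instead of blocking it.
        response = await client.fetch("http://httpbin.org/get")
        self.write(response.body)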
