Unable to create order in KuCoin Futures from Python

I was wondering how I can create a KuCoin Futures order with Python.
I tried following the kucoin-futures documentation, but it did not work.
Any help would be appreciated.
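For illustration, here is a minimal sketch of placing a limit order with the official kucoin-futures SDK's Trade client. The method and parameter names follow the SDK's README, but treat them as assumptions and verify them against the version you have installed; the credentials and order values are placeholders.

```python
# A minimal sketch, assuming the official kucoin-futures SDK
# (pip install kucoin-futures). Names follow its README; verify
# against your installed version. Credentials are placeholders.
from kucoin_futures.client import Trade

client = Trade(
    key="YOUR_API_KEY",
    secret="YOUR_API_SECRET",
    passphrase="YOUR_API_PASSPHRASE",
    is_sandbox=False,
)

# Place a limit order: buy 1 lot of XBTUSDTM at 20000 with 5x leverage.
order = client.create_limit_order(
    symbol="XBTUSDTM",
    side="buy",
    lever="5",
    size=1,
    price="20000",
)
print(order)
```

If it fails, print the exact exception; common causes are wrong credentials, a missing API passphrase, or using spot keys against the futures endpoints.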

Related

Get data from Scroll in HTTP request API (Elasticsearch)

I'm trying to write code in Python to get all the data from an API through an HTTP request.
I am wondering if there is a way to use the _scroll_id and its contents in Python. If so, how do I implement it, or could you share some documentation regarding it?
All the documentation regarding Elasticsearch in Python uses a localhost connection.
Any leads would be highly appreciated.
Elasticsearch has an official Python client for talking to the cluster.
You can use the scan() helper function. Internally, it calls the scroll API, so you don't have to manage _scroll_id yourself.
For the last part of your question, follow the client's connection documentation to see how to connect to clusters other than localhost.
https://elasticsearch-py.readthedocs.io/en/v8.3.3/helpers.html#scan
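For illustration, a minimal sketch of scan() against a remote cluster; the host, API key, and index name are placeholders:

```python
# A minimal sketch using the official elasticsearch Python client.
# Host, API key, and index name are placeholders.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(
    "https://my-cluster.example.com:9200",  # a remote cluster, not localhost
    api_key="YOUR_API_KEY",
)

# scan() drives the scroll API internally and yields one hit at a time,
# so you never touch _scroll_id yourself.
for hit in scan(es, index="my-index", query={"query": {"match_all": {}}}):
    print(hit["_source"])
```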

How to retrieve Discord server stats for other people's servers using a Python script?

Does anyone know how I can collect the 'server stats' of all the Discord servers I have joined? I would like to create a Python script that takes the server ID and automatically collects the server stats. Is it possible to do this using their API? I tried to use the API, but I could only retrieve the server messages. I also tried to use Selenium to scrape the 'server stats' values, but this failed because I have to log in to Discord. Does anyone know of a way I can do this?
Thanks in advance.
You can use the get_guild() method to get the guild and access its properties.
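For example, a minimal discord.py sketch; the token and guild ID are placeholders, and the members intent must be enabled in the developer portal for member counts to be accurate:

```python
# A minimal sketch with discord.py. Token and guild ID are placeholders;
# enable the members intent in the Discord developer portal.
import discord

intents = discord.Intents.default()
intents.members = True  # needed for accurate member-related stats

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    # Iterate over every server the bot has joined...
    for guild in client.guilds:
        print(guild.id, guild.name, guild.member_count)
    # ...or fetch one server by its ID with get_guild().
    guild = client.get_guild(123456789012345678)  # placeholder ID
    if guild is not None:
        print(guild.name, guild.member_count)

client.run("YOUR_BOT_TOKEN")
```

Note that this works through a bot account that has been invited to the servers in question; automating a regular user account (a 'self-bot') is against Discord's terms of service, which is why the Selenium approach is a dead end.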

How to control rate limiting for the free premium Twitter API

I'm using Python 3.7 with searchtweets to scrape some basic data from the Twitter API, and I want to be able to suspend the process when the rate limit has been hit. I currently only have access to the free packages (sandbox and the limited premium). I was able to do this with Tweepy and the sandbox package, which doesn't allow historical searches.
I have progressed to trying to get more data with historical tweets using searchtweets, but this does not seem to have any access to the rate_limit_status that was available in the sandbox package (open issue).
I have tried the suggestion in the response to the issue, but it still returns a 429 error code.
Does anyone know how to access the rate_limit_status in searchtweets, or is there any other way to determine how many requests I have remaining?
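One workaround, since the premium endpoints don't expose rate_limit_status through searchtweets, is to catch the 429 yourself and back off. A hedged sketch: load_credentials, gen_rule_payload, and collect_results follow the searchtweets v1 README, but the exact exception type raised on a 429 is an assumption, so inspect what your version throws and adjust the except clause:

```python
# A hedged sketch: suspend and retry when the premium API returns 429.
# The searchtweets calls follow its v1 README; the exception type caught
# here is an assumption - check what your installed version raises.
import time
import requests
from searchtweets import load_credentials, gen_rule_payload, collect_results

search_args = load_credentials(filename="twitter_keys.yaml",  # placeholder
                               yaml_key="search_tweets_premium")
rule = gen_rule_payload("python lang:en", results_per_call=100)

def search_with_backoff(rule, retries=5, wait=60):
    for _ in range(retries):
        try:
            return collect_results(rule, max_results=100,
                                   result_stream_args=search_args)
        except requests.exceptions.HTTPError as exc:
            if exc.response is not None and exc.response.status_code == 429:
                time.sleep(wait)  # suspend until the window resets
                wait *= 2         # simple exponential backoff
            else:
                raise
    raise RuntimeError("still rate-limited after all retries")

tweets = search_with_backoff(rule)
```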

Async HTTP server with Scrapy and MongoDB in Python

I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to log in to it, and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous, and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations.
I already got Klein + Scrapy working, but when I get to implementing DB access I get all messed up in my head. Is there any way to make PyMongo asynchronous with Twisted or something? (Yes, I have seen TxMongo, but the documentation is quite bad and I would like to avoid it. I have also found an implementation with adbapi, but I would like something more similar to PyMongo.)
Trying to think things through the other way around, I'm sure aiohttp has many more options to implement async DB access and such, but then I find myself at an impasse with Scrapy integration.
I have seen things like scrapa, scrapyd and ScrapyRT, but those don't really work for me. Are there any other options?
Finally, if nothing works, I'll just use aiohttp, and instead of Scrapy I'll make the requests to the website manually and use BeautifulSoup or something like that to get the info I need from the response. Any advice on how to proceed down that road?
Thanks for your attention. I'm quite a noob in this area, so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)
Is there any way to make PyMongo asynchronous with Twisted?
No. pymongo is designed as a synchronous library, and there is no way to make it asynchronous without essentially rewriting it (you could use threads or processes, but that is not what you asked, and you can also run into thread-safety issues).
Trying to think things through the other way around, I'm sure aiohttp has many more options to implement async DB access and such
It doesn't. aiohttp is an HTTP library: it can do HTTP asynchronously, and that is all. It has nothing to help you access databases; you would have to rewrite pymongo on top of it.
Finally, if nothing works, I'll just use aiohttp, and instead of Scrapy I'll make the requests to the website manually and use BeautifulSoup or something like that to get the info I need from the response.
That means a lot of work just to avoid Scrapy, and it won't help you with the pymongo issue: you would still have to rewrite pymongo!
My suggestion is: learn TxMongo! If you really can't and would rather rewrite it, use twisted.web instead of aiohttp, since then you can keep using Scrapy!
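The TxMongo docs are thin, but its API deliberately mirrors PyMongo's, so there is less to learn than it looks. A minimal sketch (database and collection names are placeholders):

```python
# A minimal TxMongo sketch - the async API mirrors pymongo, but every
# operation returns a Deferred. DB/collection names are placeholders.
from twisted.internet import defer, reactor
import txmongo

@defer.inlineCallbacks
def fetch_credentials():
    connection = yield txmongo.MongoConnection()  # localhost:27017 by default
    collection = connection.mydb.credentials      # placeholder names
    doc = yield collection.find_one({"site": "example.com"})
    print(doc)
    yield connection.disconnect()

fetch_credentials().addCallback(lambda _: reactor.stop())
reactor.run()
```

Because it all runs on the Twisted reactor, it slots straight into a Klein + Scrapy setup without blocking the crawl.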

How to check whether a user is online with Django and GAE?

I'm using Django with Google App Engine and I want to build a module for checking whether a user is online or offline.
But GAE doesn't support sessions, so it's hard for me to find a way to do it.
How can I resolve this problem? Any ideas would be appreciated, thanks.
A session library won't solve this, because HTTP is stateless. Using sessions, you can determine when someone last made a request, but that doesn't tell you if they're "online" or not - they could have immediately closed their browser tab, or they could leave it open for a week.
If you really, really need to do this, you could use the Channel API. Alternatively, you could use a session library, or log users in using the Users API, and list as 'online' anyone who has made a request in the last n minutes.
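As a hedged sketch of that last-seen approach, using new-style Django middleware (Django 1.10+) and the cache backend (memcache on GAE); the key prefix and five-minute window are arbitrary choices, not an official recipe:

```python
# A hedged sketch of the 'last seen' approach: record a timestamp in the
# cache on every authenticated request, and treat anyone seen within the
# window as online. Key prefix and window length are arbitrary choices.
import time
from django.core.cache import cache

ONLINE_WINDOW = 5 * 60  # seconds

class LastSeenMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if request.user.is_authenticated:
            # The cache entry expires on its own after ONLINE_WINDOW.
            cache.set("last-seen:%s" % request.user.pk,
                      time.time(), ONLINE_WINDOW)
        return self.get_response(request)

def is_online(user_id):
    """Online means: made a request within the last ONLINE_WINDOW seconds."""
    return cache.get("last-seen:%s" % user_id) is not None
```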
