I am trying to extend this repo with support for cryptocurrency trading using Python (will create a PR once completed).
I have all the API methods working with the exception of actually placing trades.
The endpoint for placing crypto orders is https://nummus.robinhood.com/orders/
This endpoint expects a POST request to be made with the body in JSON format along with the following headers:
"Accept": "application/json",
"Accept-Encoding": "gzip, deflate",
"Accept-Language": "en;q=1, fr;q=0.9, de;q=0.8, ja;q=0.7, nl;q=0.6, it;q=0.5",
"Content-Type": "application/json",
"X-Robinhood-API-Version": "1.0.0",
"Connection": "keep-alive",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
"Origin": "https://robinhood.com",
"Authorization": "Bearer <access_token>"
The payload I'm sending looks like this:
{
    'account_id': <account id>,
    'currency_pair_id': '3d961844-d360-45fc-989b-f6fca761d511',  # this is BTC
    'price': <BTC price derived using quotes API>,
    'quantity': <BTC quantity>,
    'ref_id': str(uuid.uuid4()),  # I'm not sure why this is needed, but I saw someone else use the uuid library to derive this value like this
    'side': 'buy',
    'time_in_force': 'gtc',
    'type': 'market'
}
The response I get is as follows:
400 Client Error: Bad Request for url: https://nummus.robinhood.com/orders/
I can confirm that I am able to authenticate successfully since I am able to use the https://nummus.robinhood.com/accounts/ and https://nummus.robinhood.com/holdings/ endpoints to view my account data and holdings.
I also believe that my access_token in the Authorization header is correct because if I set it to some random value (Bearer abc123, for instance) I get a 401 Client Error: Unauthorized response.
I think the issue has to do with the payload but I am not able to find good documentation for the nummus.robinhood.com API.
Does anyone see how/whether my request payload is malformed and/or can point me in the right direction to documentation for the nummus.robinhood.com/orders endpoint?
You need to pass the JSON payload as the value of the json parameter in the requests post call:
import uuid

import requests

json_payload = {
    'account_id': <account id>,
    'currency_pair_id': '3d961844-d360-45fc-989b-f6fca761d511',  # this is BTC
    'price': <BTC price derived using quotes API>,
    'quantity': <BTC quantity>,
    'ref_id': str(uuid.uuid4()),  # client-generated UUID, as in the question
    'side': 'buy',
    'time_in_force': 'gtc',
    'type': 'market'
}
headers = {
    "Accept": "application/json",
    "Accept-Encoding": "gzip, deflate",
    "Accept-Language": "en;q=1, fr;q=0.9, de;q=0.8, ja;q=0.7, nl;q=0.6, it;q=0.5",
    "Content-Type": "application/json",
    "X-Robinhood-API-Version": "1.0.0",
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36",
    "Origin": "https://robinhood.com",
    "Authorization": "Bearer <access_token>"
}
url = "https://nummus.robinhood.com/orders/"
s = requests.Session()
res = s.post(url, json=json_payload, timeout=10, headers=headers)
print(res.status_code)
print(res.text)
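Note that json=json_payload is what serializes the dict into a valid JSON request body; passing it as data=json_payload would form-encode the keys and values instead, which is the usual cause of a 400 like this. For reference, the call above is equivalent to serializing by hand (Content-Type is already set in headers):

import json

res = s.post(url, data=json.dumps(json_payload), timeout=10, headers=headers)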
I want to parse product data from this page, but it doesn't work with requests.get. So I inspected the page's network requests and found an interesting link:
I tried to send a POST request to this link with the correct form data, but in the response I only got {"message":"Expecting value (near 1:1)","status":400}
How can I get the correct product data from this page?
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36",
    "Accept": '*/*',
    "Accept-Encoding": "gzip, deflate, br",
    'Connection': 'keep-alive',
    'Host': 'cgrd9wlxe4-dsn.algolia.net',
    'Origin': 'https://www.eprice.it',
    'Referer': "https://www.eprice.it/",
    'Content-Type': 'application/x-www-form-urlencoded',
    "Sec-Fetch-Dest": 'empty',
    "Sec-Fetch-Mode": 'cors',
    'Sec-Fetch-Site': 'cross-site',
    'sec-ch-ua': "Not A;Brand",
    "sec-ch-ua-mobile": '?0',
    "sec-ch-ua-platform": "Windows",
}
form_data = {
    "requests": [
        {
            "indexName": "prd_products_suggest",
            "params": {
                "highlightPreTag": "<strong>",
                "highlightPostTag": "</strong>",
                "query": 6970995781939,
                "hitsPerPage": 36,
                "clickAnalytics": 1,
                "analyticsTags": ["main", "desktop"],
                "ruleContexts": ["ovr", "desktop", "t1"],
                "facetingAfterDistinct": 1,
                "getRankingInfo": 1,
                "page": 0,
                "maxValuesPerFacet": 10,
                "facets": ["manufacturer", "offer.price", "scegliPer", "offer.shopType",
                           "reviews.avgRatingInt",
                           "navigation.lvl0,navigation.lvl1,navigation.lvl2,navigation.lvl3"],
                "tagFilters": ""
            }
        },
        {
            "indexName": "prd_products_suggest_b",
            "params": {
                "query": 6970995781939,
                "hitsPerPage": 10,
                "clickAnalytics": 1,
                "analyticsTags": ["car_offerte_oggi", "desktop"],
                "ruleContexts": ["ovr", "car_offerte_oggi", "desktop"],
                "getRankingInfo": 1,
                "page": 0,
                "maxValuesPerFacet": 10,
                "minProximity": 2,
                "facetFilters": [],
                "facets": ["manufacturer", "offer.price", "scegliPer", "offer.shopType", "reviews.avgRatingInt",
                           "navigation.lvl0,navigation.lvl1,navigation.lvl2,navigation.lvl3"],
                "tagFilters": ""
            }
        }
    ]
}
response = requests.post(
    url="https://cgrd9wlxe4-dsn.algolia.net/1/indexes/*/queries?"
        "x-algolia-agent=Algolia%20for%20JavaScript%20(4.11.0)%3B%20Browser%20(lite)&"
        "x-algolia-api-key=e9c9895532cb88b620f96f3e6617c00f&"
        "x-algolia-application-id=CGRD9WLXE4",
    headers=headers,
    data=form_data
)
print(response.text)
print(response.text)
Algolia is a hosted search API: a retail company can index their product list into Algolia and then integrate their front end with it to query for products to display to customers.
When you inspect the page's network requests, you're seeing the calls out to this search API that are formed by the owner of the website (the retail company), likely using the client you can download from Algolia directly.
I'm not sure why the form data isn't working; if you download the Python client, you will find their own example of how to integrate with it. But the point of the context above is that the hosted search API takes requests in different ways, so you can take the source message you have, set the Content-Type header to 'application/json', and you'll get a response.
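As a minimal sketch of that fix, reusing the headers and form_data dicts from the question above, the only change is overriding the pinned Content-Type and letting requests serialize the body as JSON via json=:

headers["Content-Type"] = "application/json"  # was application/x-www-form-urlencoded
response = requests.post(
    url="https://cgrd9wlxe4-dsn.algolia.net/1/indexes/*/queries?"
        "x-algolia-agent=Algolia%20for%20JavaScript%20(4.11.0)%3B%20Browser%20(lite)&"
        "x-algolia-api-key=e9c9895532cb88b620f96f3e6617c00f&"
        "x-algolia-application-id=CGRD9WLXE4",
    headers=headers,
    json=form_data,  # json= instead of data= so the nested dict is sent as JSON
)
print(response.text)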
The full API documentation - https://www.algolia.com/doc/rest-api/search/#search-index-post
I'm getting stuck trying to bring P2P selling data from Binance using Python. Running the code below, I can bring in the information from the BUY section, but I'm not able to see the information from the SELL section. Can you help me?
The following code runs fine, but it only shows the BUY section of Binance P2P. When I try to use this URL, for example (https://p2p.binance.com/es/trade/sell/BUSD?fiat=ARS&payment=ALL), nothing changes.
import json

import pandas as pd
import requests

url = 'https://p2p.binance.com/bapi/c2c/v2/friendly/c2c/adv/search'
p2p = requests.get(url)
q = p2p.text
w = json.loads(q)
e = w['data']
df = pd.json_normalize(e)
df
To access the p2p data you need to POST to https://p2p.binance.com/bapi/c2c/v2/friendly/c2c/adv/search
So, for example:
import requests

headers = {
    "Accept": "*/*",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
    "content-type": "application/json",
    "Host": "p2p.binance.com",
    "Origin": "https://p2p.binance.com",
    "Pragma": "no-cache",
    "TE": "Trailers",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0"
}
data = {
    "asset": "USDT",
    "fiat": "ZAR",
    "merchantCheck": True,
    "page": 1,
    "payTypes": ["BANK"],
    "publisherType": None,
    "rows": 20,
    "tradeType": "Sell",
}
r = requests.post('https://p2p.binance.com/bapi/c2c/v2/friendly/c2c/adv/search', headers=headers, json=data).json()
You can change data according to your needs (e.g. change "tradeType" to "Buy"). Unfortunately the API isn't documented, so working out the parameters requires some trial and error. This question has a good list of the parameters.
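Tying that back to a DataFrame, here is a sketch under the assumption that the POST response keeps the adverts in a top-level 'data' list, the same key the question's GET code was normalizing:

import pandas as pd

# r is the parsed JSON from the POST above
df = pd.json_normalize(r['data'])
print(df.head())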
I am trying to get the data from this website: https://en.macromicro.me/charts/947/commodity-ccfi-scfi
I understand that the data is called from an API. How do I find out how the call is made, and how do I extract it using Python?
I am new to Python and HTML in general, so I have no idea where to start.
I tried:
import requests
from bs4 import BeautifulSoup
import json
import pandas as pd
urlheader = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
    "X-Requested-With": "XMLHttpRequest",
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "en-US,en;q=0.9",
    "Authorization": "Bearer 640eabc473294fbac27930ef08d28ab4",
    "Connection": "keep-alive",
    "Cookie": "PHPSESSID=5q99iuiarvf1ba2lafh6je5hr5; _ga=GA1.2.628840989.1624431403; _gid=GA1.2.146418174.1624431403; _fbp=fb.1.1624431403269.1337227854; _hjTLDTest=1; _hjid=89fd1c1b-93a7-4da9-bf90-46efcb6aae15; _hjFirstSeen=1",
    "DNT": "1",
    "Host": "en.macromicro.me",
    "Referer": "https://en.macromicro.me/charts/947/commodity-ccfi-scfi",
}
url = "https://en.macromicro.me/charts/data/947"
req = requests.post(url,headers=urlheader)
soup = BeautifulSoup(req.content, "lxml")
print(soup)
But I get the following error
<html><body><p>{"status":"Method Not Allowed","code":405,"text":"HTTP 405 (POST \/charts\/data\/947)","level":0}</p></body></html>
You can make a GET request to the API using requests and convert the response to JSON. resp will then hold the data in JSON format, and you can easily extract the info you need:
import requests
url = "https://en.macromicro.me/charts/data/947"
resp = requests.get(url)
resp = resp.json()
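Since the endpoint is undocumented, one reasonable next step is to inspect the shape of the parsed response before extracting anything; this sketch assumes nothing about the key names:

# peek at the top-level structure to find where the chart series live
if isinstance(resp, dict):
    print(list(resp.keys()))
else:
    print(resp[:2])  # first couple of elements if it's a list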
I'm trying to implement the Yandex OCR translator tool in my code. With the help of Burp Suite, I managed to find that the following request is the one used to send the image:
I'm trying to emulate this request with the following code:
import requests
from requests_toolbelt import MultipartEncoder
files = {
    'file': ("blob", open("image_path", 'rb'), "image/jpeg")
}
#(<filename>, <file object>, <content type>, <per-part headers>)
burp0_url = "https://translate.yandex.net:443/ocr/v1.1/recognize?srv=tr-image&sid=9b58493f.5c781bd4.7215c0a0&lang=en%2Cru"
m = MultipartEncoder(files, boundary='-----------------------------7652580604126525371226493196')
burp0_headers = {"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:65.0) Gecko/20100101 Firefox/65.0", "Accept": "*/*", "Accept-Language": "en-US,en;q=0.5", "Accept-Encoding": "gzip, deflate", "Referer": "https://translate.yandex.com/", "Content-Type": "multipart/form-data; boundary=-----------------------------7652580604126525371226493196", "Origin": "https://translate.yandex.com", "DNT": "1", "Connection": "close"}
print(requests.post(burp0_url, headers=burp0_headers, files=m.to_string()).text)
though sadly it yields the following output:
{"error":"BadArgument","description":"Bad argument: file"}
Does anyone know how this could be solved?
Many thanks in advance!
You are passing the MultipartEncoder.to_string() result to the files parameter. You are now asking requests to encode the result of the multipart encoder to a multipart component. That's one time too many.
You don't need to replicate every byte here, just post the file, and perhaps set the user agent, referer, and origin:
import requests

files = {
    'file': ("blob", open("image_path", 'rb'), "image/jpeg")
}
url = "https://translate.yandex.net:443/ocr/v1.1/recognize?srv=tr-image&sid=9b58493f.5c781bd4.7215c0a0&lang=en%2Cru"
headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:65.0) Gecko/20100101 Firefox/65.0",
    "Referer": "https://translate.yandex.com/",
    "Origin": "https://translate.yandex.com",
}
response = requests.post(url, headers=headers, files=files)
print(response.status_code)
print(response.json())
The Connection header is best left to requests; it can decide when a connection should be kept alive just fine. The Accept* headers are there to tell the server what your client can handle, and requests sets those automatically too.
I get a 200 OK response with that code:
200
{'data': {'blocks': []}, 'status': 'success'}
However, if you don't set additional headers (remove the headers=headers argument), the request also works, so Yandex doesn't appear to be filtering for robots here.
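That is because requests already fills in sensible defaults on its own; you can inspect them on a fresh session (the exact values vary by requests version):

import requests

# default headers already cover User-Agent, Accept, Accept-Encoding and Connection
print(requests.Session().headers)
# e.g. {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate',
#       'Accept': '*/*', 'Connection': 'keep-alive'}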
https://open.spotify.com/search/results/cheval is the link that triggers various intermediary requests, one being the attempted request below.
When running the following request in Postman (Chrome plugin), 13 response cookies are shown, but they do not seem to exist when running this request in Python (response.cookies is empty). I have also tried using a session, but with the same result.
Update: although these cookies were retrieved after using Selenium (to log in, solve the captcha, and transfer the login cookies to the session for the following request), it's still unknown which variable(s) are required for the target cookies to be returned with that request.
How can those response cookies be retrieved (if at all) with Python?
url = "https://api.spotify.com/v1/search"
querystring = {"type":"album,artist,playlist,track","q":"cheval*","decorate_restrictions":"true","best_match":"true","limit":"50","anonymous":"false","market":"from_token"}
headers = {
    'access-control-request-method': "GET",
    'origin': "https://open.spotify.com",
    'x-devtools-emulate-network-conditions-client-id': "0959BC056CD6303CAEC3E2E5D7796B72",
    'user-agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36",
    'access-control-request-headers': "authorization",
    'accept': "*/*",
    'accept-encoding': "gzip, deflate, br",
    'accept-language': "en-US,en;q=0.9",
    'cache-control': "no-cache",
    'postman-token': "253b0e50-7ef1-759a-f7f4-b09ede65e462"
}
response = requests.request("OPTIONS", url, headers=headers, params=querystring)
print(response.text)