I'm trying to get Binance Futures order book history data using the API. I requested the data from Binance and got the answer "Your application for historical futures order book data has been approved, please follow our Github guidance to access with your whitelisted account API key", and I have set up the API as follows.
And I have modified the Enable Symbol Whitelist like this:
The next step, I followed Github guidance: https://github.com/binance/binance-public-data/tree/master/Futures_Order_Book_Download
which has the following sample code:
"""
This example python script shows how to download the Historical Future Order Book level 2 Data via API.
The data download API is part of the Binance API (https://binance-docs.github.io/apidocs/spot/en/#general-api-information).
For how to use it, you may find info there with more examples, especially SIGNED Endpoint security as in https://binance-docs.github.io/apidocs/spot/en/#signed-trade-user_data-and-margin-endpoint-security
Before executing this file, please note:
- The API account needs to have a Futures account to access Futures data.
- The API key has been whitelisted to access the data.
- Read the comments section in this file to know where you should specify your request values.
"""
# Install the following required packages
import requests
import time
import hashlib
import hmac
from urllib.parse import urlencode
S_URL_V1 = "https://api.binance.com/sapi/v1"
# Specify the api_key and secret_key with your API Key and secret_key
api_key = "your_api_key"
secret_key = "your_secret_key "
# Specify the four input parameters below:
symbol = "ADAUSDT" # specify the symbol name
startTime = 1635561504914 # specify the starttime
endTime = 1635561604914 # specify the endtime
dataType = "T_DEPTH" # specify the dataType to be downloaded
# Function to generate the signature
def _sign(params={}):
    data = params.copy()
    ts = str(int(1000 * time.time()))
    data.update({"timestamp": ts})
    h = urlencode(data)
    h = h.replace("%40", "#")
    b = bytearray()
    b.extend(secret_key.encode())
    signature = hmac.new(b, msg=h.encode("utf-8"), digestmod=hashlib.sha256).hexdigest()
    sig = {"signature": signature}
    return data, sig
# Function to generate the download ID
def post(path, params={}):
    sign = _sign(params)
    query = urlencode(sign[0]) + "&" + urlencode(sign[1])
    url = "%s?%s" % (path, query)
    header = {"X-MBX-APIKEY": api_key}
    resultPostFunction = requests.post(url, headers=header, timeout=30, verify=True)
    return resultPostFunction
# Function to generate the download link
def get(path, params):
    sign = _sign(params)
    query = urlencode(sign[0]) + "&" + urlencode(sign[1])
    url = "%s?%s" % (path, query)
    header = {"X-MBX-APIKEY": api_key}
    resultGetFunction = requests.get(url, headers=header, timeout=30, verify=True)
    return resultGetFunction
"""
Beginning of the execution.
The final output will be:
- A link to download the specific data you requested with the specific parameters.
Sample output will be like the following: {'expirationTime': 1635825806, 'link': 'https://bin-prod-user-rebate-bucket.s3.amazonaws.com/future-data-download/XXX'
Copy the link to the browser and download the data. The link would expire after the expirationTime (usually 24 hours).
- A message reminding you to re-run the code and download the data hours later.
Sample output will be like the following: {'link': 'Link is preparing; please request later. Notice: when date range is very large (across months), we may need hours to generate.'}
"""
timestamp = str(
    int(1000 * time.time())
)  # current timestamp which serves as an input for the params variable
paramsToObtainDownloadID = {
    "symbol": symbol,
    "startTime": startTime,
    "endTime": endTime,
    "dataType": dataType,
    "timestamp": timestamp,
}
# Calls the "post" function to obtain the download ID for the specified symbol, dataType and time range combination
path = "%s/futuresHistDataId" % S_URL_V1
resultDownloadID = post(path, paramsToObtainDownloadID)
print(resultDownloadID)
downloadID = resultDownloadID.json()["id"]
print(downloadID) # prints the download ID, example: {'id': 324225}
# Calls the "get" function to obtain the download link for the specified symbol, dataType and time range combination
paramsToObtainDownloadLink = {"downloadId": downloadID, "timestamp": timestamp}
pathToObtainDownloadLink = "%s/downloadLink" % S_URL_V1
resultToBeDownloaded = get(pathToObtainDownloadLink, paramsToObtainDownloadLink)
print(resultToBeDownloaded)
print(resultToBeDownloaded.json())
I have replaced api_key and secret_key with my own keys, and this is the result I got.
Can you tell me where I made a mistake? Thanks in advance for the answer.
Look at https://www.binance.com/en-NG/landing/data.
Futures Order Book Data: available only on Binance Futures. It requires the futures account to be whitelisted first and can only be downloaded via API.
Orderbook snapshot (S_Depth): since January 2020, only on the BTC/USDT symbol.
Tick-level orderbook (T_Depth): since January 2020, on all symbols.
The page says you should fill out the Binance application form here to get whitelisted for the futures section:
https://docs.google.com/forms/d/e/1FAIpQLSexCgyvZEMI1pw1Xj6gwKtfQTYUbH5HrUQ0gwgPZtM9FaM2Hw/viewform
Second thing - you are interested in futures, not spot, so the base URL should be the futures one, https://fapi.binance.com (fapi), instead of api.binance.com/sapi
Third thing - the API endpoint for the order book is
GET /fapi/v1/depth
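For instance, a minimal sketch of calling that endpoint with requests (this is the live order book snapshot, not the historical download API; this public market data endpoint needs no API key or signature, and the symbol and limit values are just examples):
import requests

# Live futures order book snapshot from the public fapi endpoint
resp = requests.get(
    "https://fapi.binance.com/fapi/v1/depth",
    params={"symbol": "ADAUSDT", "limit": 100},
    timeout=30,
)
print(resp.json()["bids"][:5])  # top five bid levels as [price, quantity] pairs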
Apologies for any incorrect formatting, long time since I posted anything on stack overflow.
I'm looking to send a JSON payload of data to Azure IoT Hub, which I am then going to process using an Azure Function App to display real-time telemetry data in Azure Digital Twins.
I'm able to post the payload to IoT Hub and view it using the explorer fine; however, my function is unable to take this and display the telemetry data in Azure Digital Twins. From Googling I've found that the JSON message needs to be UTF-8 encoded and have its content type set to application/json, which I think might be the problem with my current attempt at fixing this.
I've included a snippet of the log stream from my Azure Function App below; as shown, the "body" part of the message is scrambled, which is why I think it may be an issue with how the payload is encoded:
"iothub-message-source":"Telemetry"},"body":"eyJwb3dlciI6ICIxLjciLCAid2luZF9zcGVlZCI6ICIxLjciLCAid2luZF9kaXJlY3Rpb24iOiAiMS43In0="}
2023-01-27T13:39:05Z [Error] Error in ingest function: Cannot access child value on Newtonsoft.Json.Linq.JValue.
My current test code is below for sending payloads to IoT Hub, with the potential issue being that I'm not encoding the payload properly.
import datetime, requests
import json
deviceID = "JanTestDT"
IoTHubName = "IoTJanTest"
iotHubAPIVer = "2018-04-01"
iotHubRestURI = "https://" + IoTHubName + ".azure-devices.net/devices/" + deviceID + "/messages/events?api-version=" + iotHubAPIVer
SASToken = 'SharedAccessSignature'
Headers = {}
Headers['Authorization'] = SASToken
Headers['Content-Type'] = "application/json"
Headers['charset'] = "utf-8"
datetime = datetime.datetime.now()
payload = {
    'power': "1.7",
    'wind_speed': "1.7",
    'wind_direction': "1.7"
}
payload2 = json.dumps(payload, ensure_ascii = False).encode("utf8")
resp = requests.post(iotHubRestURI, data=payload2, headers=Headers)
I've attempted to encode the payload correctly in several different ways, including utf-8 within requests.post, however this either produces an error that a dict cannot be encoded, or the body still shows up encoded in the Function App log stream so I'm unable to decipher it.
Thanks for any help and/or guidance that can be provided on this - happy to elaborate further on anything that is not clear.
Is there any particular reason why you want to use the Azure IoT Hub REST API endpoint instead of the Python SDK? Also, even though you see the values in JSON format when viewed through Azure IoT Explorer, the message format when viewed through a storage endpoint such as blob reveals a different format, as you pointed out.
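(Incidentally, the body value in your log is just the Base64 form of your JSON payload; decoding it locally, which is only a quick sanity check and nothing IoT Hub specific, gives back the original message:)
import base64, json

# "body" string copied from the Function App log stream in the question
body = "eyJwb3dlciI6ICIxLjciLCAid2luZF9zcGVlZCI6ICIxLjciLCAid2luZF9kaXJlY3Rpb24iOiAiMS43In0="
decoded = json.loads(base64.b64decode(body).decode("utf-8"))
print(decoded)  # {'power': '1.7', 'wind_speed': '1.7', 'wind_direction': '1.7'}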
I haven't tested the Python code with the REST API, but I have a Python SDK sample that worked for me. Please refer to the code sample below.
import os
import random
import time
from datetime import date, datetime
from json import dumps
from azure.iot.device import IoTHubDeviceClient, Message

def json_serial(obj):
    """JSON serializer for objects not serializable by default json code"""
    if isinstance(obj, (datetime, date)):
        return obj.isoformat()
    raise TypeError("Type %s not serializable" % type(obj))

CONNECTION_STRING = "<AzureIoTHubDevicePrimaryConnectionString>"
TEMPERATURE = 45.0
HUMIDITY = 60
MSG_TXT = '{{"temperature": {temperature},"humidity": {humidity}, "timesent": {timesent}}}'

def run_telemetry_sample(client):
    print("IoT Hub device sending periodic messages")
    client.connect()
    while True:
        temperature = TEMPERATURE + (random.random() * 15)
        humidity = HUMIDITY + (random.random() * 20)
        x = datetime.now().isoformat()
        timesent = dumps(datetime.now(), default=json_serial)
        msg_txt_formatted = MSG_TXT.format(
            temperature=temperature, humidity=humidity, timesent=timesent)
        message = Message(msg_txt_formatted, content_encoding="utf-8", content_type="application/json")
        print("Sending message: {}".format(message))
        client.send_message(message)
        print("Message successfully sent")
        time.sleep(10)

def main():
    print("IoT Hub Quickstart #1 - Simulated device")
    print("Press Ctrl-C to exit")
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        run_telemetry_sample(client)
    except KeyboardInterrupt:
        print("IoTHubClient sample stopped by user")
    finally:
        print("Shutting down IoTHubClient")
        client.shutdown()

if __name__ == '__main__':
    main()
You can edit the MSG_TXT variable in the code to match your payload format and pass the values. Note that the SDK uses the Message class from the Azure IoT Device library, which has overloads for content type and content encoding. Here is how I have passed the overloads in the code: message = Message(msg_txt_formatted, content_encoding="utf-8", content_type="application/json")
I have validated the message by routing it to a Blob Storage container and could see the telemetry data in JSON format. Please refer to the screenshot below showing the data captured at the endpoint.
Hope this helps!
Good morning. This morning I created a program in Python which, given the URI of an album as input, gives me back the list of the tracks on that album (using the Spotify API).
I then extended the program to also download the audio features of the songs on that list.
Since I need to perform this operation for several albums, I created a JSON file with all the URIs of the albums I need, and with a for loop I extracted all the tracks of each album (and then extracted the song features).
The problem is: if I have to download the track list of, say, 500 albums, how can I get the URI of each album automatically? It seems impossible to collect each URI by hand.
Maybe there is a way to give the list of track names as input and, using the API, receive the features of those songs directly as output? Or maybe there is a way to give the list of album names as input and receive the URIs as output?
Thank you.
I tried to start collecting all the URIs by hand, but it's too slow and not very scalable (maybe one day I'll need to download the tracks of a thousand albums...).
EDIT: -------------------------------------------------------
Here's the code.
first part
second part
Here is my code that does what you want:
import requests
import spotipy
from spotipy.oauth2 import SpotifyOAuth
import urllib3
from pprint import pprint
import json
import time
client_id = ""
client_secret = ""
redirect_uri = ""
scope = "playlist-modify-private,playlist-modify-public"
market = "NL" # Two letter code of your country
dl_playlist_id = "spotify:playlist:a1b2c3d4e5f6" # the playlist of albums you want to download
album_names_file = "albums.txt"
# Anti rate-limit code
session = requests.Session()
retry = urllib3.Retry(
    total=0,
    connect=None,
    read=0,
    allowed_methods=frozenset(["GET", "POST", "PUT", "DELETE"]),
    status=0,
    backoff_factor=0.3,
    status_forcelist=(429, 500, 502, 503, 504),
    respect_retry_after_header=False,  # <---
)
adapter = requests.adapters.HTTPAdapter(max_retries=retry)
session.mount("http://", adapter)
session.mount("https://", adapter)
# log-in code
sp = spotipy.Spotify(
    auth_manager=spotipy.SpotifyOAuth(
        client_id=client_id,
        client_secret=client_secret,
        redirect_uri=redirect_uri,
        scope=scope,
        open_browser=False,
    ),
    requests_session=session,
)
# Album names to IDs
with open(album_names_file) as f:
    albums = f.readlines()

album_ids = []
for album in albums:
    found = sp.search(album, limit=1, offset=0, type="album", market=market)
    try:
        album_ids.append(found["albums"]["items"][0]["id"])
    except:
        print(f"{album} not found.")
# add album tracks to playlist
for id in album_ids:
    album_data = sp.album_tracks(album_id=id, limit=50, market=market)
    uris = []
    for item in album_data["items"]:
        uris.append(item["uri"])
    sp.playlist_add_items(dl_playlist_id, uris)
print("Done")
Note: albums.txt should be a plain text file with a list of album names, one below the other; not a JSON file. You can also add the artist name next to the album name.
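If what you ultimately need are the song features rather than a playlist, a rough follow-up sketch could look like this (it assumes you also collect every track URI into one combined list, for example by calling all_uris.extend(uris) inside the loop above; audio_features accepts at most 100 tracks per call):
all_uris = []  # fill this inside the "for id in album_ids" loop: all_uris.extend(uris)

# Fetch audio features in batches of 100 (the per-call maximum)
features = []
for i in range(0, len(all_uris), 100):
    features.extend(sp.audio_features(all_uris[i:i + 100]))
print(len(features))  # one features dict per track (danceability, energy, tempo, ...)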
I'm fairly new to Python and I'm trying to migrate subscriptions from an older YouTube account to a newer one that I'll use going forward. I pulled my subscriptions export from the old account and have around 470+ subs that I'll need to migrate over.
I found this article, which absolutely works for automatically subscribing to a YouTube channel via its channel_id, but it seems like I can only run the .py script with one value per key.
I tried all sorts of googling to see how I can include multiple values for the key (channelId), but it always only auto-subs to the last one in the dictionary.
Can someone please help show me what I'm missing? I feel like there has to be a way to add multiple channelId values in that key dictionary, right?!
Here's what my code looks like > screenshot
import os
import google.oauth2.credentials
import google_auth_oauthlib.flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from google_auth_oauthlib.flow import InstalledAppFlow
# The CLIENT_SECRETS_FILE variable specifies
# the name of a file that contains
# client_id and client_secret.
CLIENT_SECRETS_FILE = "client_secret.json"
# This scope allows for full read/write access
# to the authenticated user's account and
# requires requests to use an SSL connection.
SCOPES = ['https://www.googleapis.com/auth/youtube.force-ssl']
API_SERVICE_NAME = 'youtube'
API_VERSION = 'v3'
def get_authenticated_service():
    flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
    credentials = flow.run_console()
    return build(API_SERVICE_NAME, API_VERSION, credentials=credentials)
def print_response(response):
    print(response)
# Build a resource based on a list of
# properties given as key-value pairs.
# Leave properties with empty values out
# of the inserted resource.
def build_resource(properties):
    resource = {}
    for p in properties:
        # Given a key like "snippet.title", split into
        # "snippet" and "title", where "snippet" will be
        # an object and "title" will be a property in that object.
        prop_array = p.split('.')
        ref = resource
        for pa in range(0, len(prop_array)):
            is_array = False
            key = prop_array[pa]
            # For properties that have array values, convert a name like
            # "snippet.tags[]" to snippet.tags, and set a flag to handle
            # the value as an array.
            if key[-2:] == '[]':
                key = key[0:len(key)-2:]
                is_array = True
            if pa == (len(prop_array) - 1):
                # Leave properties without values
                # out of inserted resource.
                if properties[p]:
                    if is_array:
                        ref[key] = properties[p].split(', ')
                    else:
                        ref[key] = properties[p]
            elif key not in ref:
                # For example, the property is "snippet.title",
                # but the resource does not yet have a "snippet"
                # object. Create the snippet object here.
                # Setting "ref = ref[key]" means that in the
                # next time through the "for pa in range ..." loop,
                # we will be setting a property in the
                # resource's "snippet" object.
                ref[key] = {}
                ref = ref[key]
            else:
                # For example, the property is "snippet.description",
                # and the resource already has a "snippet" object.
                ref = ref[key]
    return resource
# Remove keyword arguments that are not set
def remove_empty_kwargs(**kwargs):
    good_kwargs = {}
    if kwargs is not None:
        for key, value in kwargs.items():
            if value:
                good_kwargs[key] = value
    return good_kwargs
def subscriptions_insert(client, properties, **kwargs):
    resource = build_resource(properties)
    kwargs = remove_empty_kwargs(**kwargs)
    response = client.subscriptions().insert(
        body=resource, **kwargs).execute()
    return print_response(response)
if __name__ == '__main__':
    # When running locally, disable OAuthlib's
    # HTTPs verification. When running in production
    # * do not * leave this option enabled.
    os.environ['OAUTHLIB_INSECURE_TRANSPORT'] = '1'
    client = get_authenticated_service()
    subscriptions_insert(client,
        {'snippet.resourceId.kind': 'youtube# channel',
         'snippet.resourceId.channelId': 'UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw'},
        part='snippet')
According to the YouTube Data API v3 documentation (Subscriptions: insert endpoint and Subscriptions resource), it seems that you can only subscribe to one channel at a time. Since you have 10,000 units of quota per day by default (unless you request extended quota), and Subscriptions: insert costs 50 units per call, for 470+ subscriptions you would need about 3 days to proceed.
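With the script from your question, that just means calling subscriptions_insert once per channel ID, for example (a rough sketch that reuses your get_authenticated_service and subscriptions_insert helpers; the IDs below are placeholders for your exported list):
channel_ids = [
    'UC09fL42MpkktKZWmWxYiDhw',
    'UC0Q7Hlz75NYhYAuq6O0fqHw',
    # ... the rest of the channel IDs from your subscriptions export
]

client = get_authenticated_service()
for channel_id in channel_ids:
    # one Subscriptions: insert call (50 quota units) per channel
    subscriptions_insert(
        client,
        {'snippet.resourceId.kind': 'youtube#channel',
         'snippet.resourceId.channelId': channel_id},
        part='snippet')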
Otherwise you can proceed as follows. It seems that the first time I tried this with ~500 channels I got subscribed to ~290 of them, but now I mostly only receive the following (when removing -H 'Accept-Encoding: gzip, deflate, br' from the cURL request):
{
    "error": {
        "code": 429,
        "message": "Resource has been exhausted (e.g. check quota).",
        "errors": [
            {
                "message": "Resource has been exhausted (e.g. check quota).",
                "domain": "global",
                "reason": "rateLimitExceeded"
            }
        ],
        "status": "RESOURCE_EXHAUSTED"
    }
}
So it's an unreliable method that you can experiment with further.
Ever wondered how to do that in a single request without using any quota?
Go to an ad hoc YouTube channel YOUR_CHANNEL that you want to subscribe to: https://www.youtube.com/channel/YOUR_CHANNEL_ID
Open the Network tab of your web-browser by using Ctrl + Shift + E (on Firefox) and filter XHR requests.
Now click on Subscribe.
You should see a request to subscribe, copy it as cURL (by right-clicking).
Change at the end
"channelIds":["YOUR_CHANNEL_ID"]
to:
"channelIds":["YOUR_CHANNEL_ID_0, YOUR_CHANNEL_ID_1, ..., YOUR_CHANNEL_ID_499"]
Where YOUR_CHANNEL_ID_0 is your YOUR_CHANNEL_ID and YOUR_CHANNEL_ID_1 the second channel you want to subscribe to and so forth.
Execute the modified cURL request in a terminal and that's it!
Note that this webpage contains a subscriptions count and this one contains all your subscriptions.
To get more than 249 different channels, I used:
import requests, json

channelIds = set()
pageToken = ''
API_KEY = 'AIzaSy...'
i = 0
while len(channelIds) < 250:
    url = f'https://www.googleapis.com/youtube/v3/search?q={i}&type=channel&maxResults=50&key={API_KEY}'
    if pageToken != '':
        url += f"&pageToken={pageToken}"
    content = requests.get(url).text
    data = json.loads(content)
    for item in data['items']:
        channelIds.add(item['id']['channelId'])
    print(len(channelIds))
    if 'nextPageToken' in data:
        pageToken = data['nextPageToken']
    else:
        break
    i += 1
print('["' + '","'.join(channelIds) + '"]', len(channelIds))
As #Benjamin Loison has mentioned, there is a quota limit on the usage of the API. If you'd like to raise the limit, I think there is a form you can fill out to request more. However, I don't recommend doing so, since the form is mainly intended for large applications that will be used for a long time, and it involves a long process of human review of what you're trying to build (this is based on my personal experience and might not be entirely accurate).
My suggestion would be to use the script you have to print out a list of channel links, so you can click into each of them and press the subscribe button. 470-ish channels should not take you a long time.
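For example, something as simple as this would do (a sketch; channel_ids stands for the IDs from your export):
channel_ids = ['UC09fL42MpkktKZWmWxYiDhw', 'UC0Q7Hlz75NYhYAuq6O0fqHw']  # your exported IDs

# Print one clickable channel URL per ID so you can subscribe manually
for channel_id in channel_ids:
    print(f"https://www.youtube.com/channel/{channel_id}")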
I'm trying to make an API request for the first time and I've made my app, but it says I have to do a local authentication with the instructions here:
Link to TDAmeritrade authentication
But it says I have to go to https://auth.tdameritrade.com/auth?response_type=code&redirect_uri={URLENCODED REDIRECT URI}&client_id={URLENCODED Consumer Key}%40AMER.OAUTHAP where I plug in the "URL-encoded redirect URI" and "URL-encoded consumer key", and I don't know how to get the URI. Let's say I'm using localhost port 1111: do I just plug in "localhost:1111"? Because that didn't work.
Perhaps that doesn't even matter, because I was writing the following:
import requests
from config import consumer_key
#daily prices generator
endpoint = "https://api.tdameritrade.com/v1/marketdata/{}/pricehistory".format("AAPL")
#parameters
import time
timeStamp=time.time()
timeStamp=int(timeStamp)
parameters = {'api_key': consumer_key,
              'periodType': 'day',
              'frequencyType': "minute",
              'frequency': '5',
              'period': '1',
              'endDate': str(timeStamp + 86400),
              'startDate': str(timeStamp),
              'extendedHourData': 'true'}
#caller
stuff = requests.get(url = endpoint, params = parameters)
#reformater
lister = stuff.json()
lister
which returned "{'error': 'The API key in request query param is either null or blank or invalid.'}"
TDA has some rules:
timeStamp needs to be in milliseconds.
You can only get the past 31 days in minute format.
There are also some format constraints:
frequencyType=minute --> then use periodType=day
frequencyType=daily --> then use periodType=month
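Putting those rules together, a corrected request might look roughly like this (a sketch, not tested against your account; it assumes the consumer key goes in the apikey query parameter and that needExtendedHoursData is the extended-hours flag):
import time
import requests
from config import consumer_key

endpoint = "https://api.tdameritrade.com/v1/marketdata/AAPL/pricehistory"
now_ms = int(time.time() * 1000)  # epoch time in milliseconds

parameters = {'apikey': consumer_key,           # assumed parameter name
              'periodType': 'day',              # minute frequency pairs with periodType=day
              'period': '1',
              'frequencyType': 'minute',
              'frequency': '5',
              'startDate': str(now_ms - 86400 * 1000),  # one day ago, in milliseconds
              'endDate': str(now_ms),
              'needExtendedHoursData': 'true'}  # assumed flag name

stuff = requests.get(url=endpoint, params=parameters)
print(stuff.json())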
I am trying to use the Etsy API to add a new listing to my store. The documentation says the following (see the section below for how to do it). First, FYI, I have never used an HTTP method before, so I am not sure how to set up the code so that it adds a new item.
(Link to the Etsy API page https://www.etsy.com/developers/documentation/reference/listing).
Method Name createListing
Synopsis Creates a new Listing.
HTTP Method POST
URI /listings
Parameters
Name Required Default Type
quantity Y int
title Y string
description Y text
price Y float
materials N array(string)
shipping_template_id N int
shop_section_id N int
image_ids N array(int)
is_customizable N boolean
non_taxable N boolean
image N image
state N active enum(active, draft)
processing_min N int
processing_max N int
category_id N int
taxonomy_id N int
tags N array(string)
who_made Y enum(i_did, collective, someone_else)
is_supply Y boolean
when_made Y enum(made_to_order, 2010_2017, 2000_2009, 1998_1999, before_1998, 1990_1997, 1980s, 1970s, 1960s, 1950s, 1940s, 1930s, 1920s, 1910s, 1900s, 1800s, 1700s, before_1700)
recipient N enum(men, women, unisex_adults, teen_boys, teen_girls, teens, boys, girls, children, baby_boys, baby_girls, babies, birds, cats, dogs, pets, not_specified)
occasion N enum(anniversary, baptism, bar_or_bat_mitzvah, birthday, canada_day, chinese_new_year, cinco_de_mayo, confirmation, christmas, day_of_the_dead, easter, eid, engagement, fathers_day, get_well, graduation, halloween, hanukkah, housewarming, kwanzaa, prom, july_4th, mothers_day, new_baby, new_years, quinceanera, retirement, st_patricks_day, sweet_16, sympathy, thanksgiving, valentines, wedding)
style N array(string)
Requires OAuth Y
Permission Scope listings_w
Notes
A shipping_template_id is required when creating a listing.
All listings created on www.etsy.com must be actual items for sale. Please see our guidelines for testing with live listings.
Creating a listing creates a single inventory product with the supplied price and quantity. Use updateInventory to add more products.
The code I have right now looks like this:
import urllib
import requests
url = 'https://openapi.etsy.com/v2/listings/active?api_key={YOUR KEY HERE}' # I put my API key here
r = requests.get(url)
payload = {'quantity': '1', 'title': 'testdfsdfdfs0','description': 'dfsdfsdfsdfdsf','price': '2.55','who_made': 'i_did','is_supply': '0','when_made': '2010_2017'}
rrr = requests.post(url,payload)
print rrr # I get an error 404
How can I add an item for sale on Etsy through Python HTTP method?
Update
from requests_oauthlib import OAuth1Session
import requests
from requests_oauthlib import OAuth1
import json
tempory_token_url = []
oauth_response_bucket = []
client_key = '.......'
client_secret = '......'
oauth = OAuth1Session(client_key, client_secret=client_secret)
request_token_url = 'https://openapi.etsy.com/v2/oauth/request_token?scope=email_r%20listings_r'
fetch_response = oauth.fetch_request_token(request_token_url)
resource_owner_key = fetch_response.get('oauth_token') # Have it
resource_owner_secret = fetch_response.get('oauth_token_secret')
oauth_url_temp = tempory_token_url[0]['login_urI']
base_authorization_url = oauth_url_temp
authorization_url = oauth.authorization_url(base_authorization_url)
redirect_response = raw_input('Paste the full redirect URL here: ')
oauth_response = oauth.parse_authorization_response(redirect_response)
verifier = oauth_response.get('oauth_verifier')
access_token_url = redirect_response
oauth = OAuth1Session(client_key, client_secret=client_secret, resource_owner_key=resource_owner_key, resource_owner_secret=resource_owner_secret, verifier=verifier)
oauth_tokens = oauth.fetch_access_token(access_token_url)
resource_owner_key = oauth_tokens.get('oauth_token')
resource_owner_secret = oauth_tokens.get('oauth_token_secret')
Any ideas how to make this work? There is very little info regarding the Etsy API, and most of the examples are in PHP, which I have no clue how to work with.
Image Uploading API
Everything looks the same as above; this time I just changed the payload, but I am getting a 403 error. I am not sure what is causing it. My best guess would be something with OAuth 1.0; I think their website says you need OAuth 1.1.
Here is how I set it up, but I am getting the 403 error:
url = 'https://openapi.etsy.com/v2/listings'
payload = {'listing_id':'342434342', 'image': ("test1.jpg", open('C:\\Users\\abc\\test1.jpg'),'image/jpeg'),'type':'image/jpeg'}
result = etsy.put(url, params=payload)
print result
Comment: ... at this point I am lost; I have no idea where to put the PIN that Etsy gave me
etsy oauth#reference
The token credentials you receive for an account do not expire,
and can be used over and over again to make authenticated API requests.
You should keep the token secret in a secure location and never send it as a plaintext parameter
(it's only used for signing your requests, and never needs to be sent in an API request on its own).
You will not need to step through the OAuth authorization again,
unless the user decides to revoke access, or unless you add features that require additional permission scopes.
Note: I didn't find an equivalent replacement for PHP's OAUTH_AUTH_TYPE_URI.
OAuth1Session defaults to signature_type=u'AUTH_HEADER', so this could be wrong.
If this fails, you could try:
from oauthlib.oauth1 import SIGNATURE_TYPE_QUERY, SIGNATURE_TYPE_BODY
OAuth1Session(..., signature_type=SIGNATURE_TYPE_QUERY)
Create etsy OAuth1Session to reuse for Requests:
etsy = OAuth1Session(client_key,
client_secret=client_secret,
resource_owner_key=resource_owner_key,
resource_owner_secret=resource_owner_secret)
etsy Making an Authorized Request to the API:
response = etsy.get("https://openapi.etsy.com/v2/users/__SELF__")
user_data = json.loads(response.text)
etsy Checking Permission Scopes After Authentication:
response = etsy.get("https://openapi.etsy.com/v2/oauth/scopes")
meta = json.loads(response.text)
etsy Creates a new Listing
url = 'https://openapi.etsy.com/v2/listings'
payload = {'quantity': '1', 'title':...}
result = etsy.post(url, params=payload)
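For completeness, a sketch of the payload with all required fields from the parameter table in the question (every value below is a placeholder; shipping_template_id is required per the Notes, and state='draft' avoids publishing a live test listing):
url = 'https://openapi.etsy.com/v2/listings'
payload = {'quantity': '1',
           'title': 'Test listing',
           'description': 'Test description',
           'price': '2.55',
           'who_made': 'i_did',
           'is_supply': 'false',
           'when_made': '2010_2017',
           'shipping_template_id': 'YOUR_SHIPPING_TEMPLATE_ID',  # required when creating a listing
           'state': 'draft'}
result = etsy.post(url, params=payload)
print(result.json())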
Comment: for api key do I need to import oauth2
According to Reference, Yes.
For write access and for accessing private user data, an OAuth access
token is required. Your application key is required to start the OAuth
authentication process.
Requires OAuth Y
Also your url should end with
URI /listings
url = 'https://openapi.etsy.com/v2/listings'
Your url should only go up to the question mark, for example:
url = 'https://openapi.etsy.com/v2/listings/active'
payload = {'api_key':YOUR KEY HERE, 'quantity': '1', ...
rrr = requests.post(url, params=payload)
Requests Quickstart: Passing Parameters In URLs
You often want to send some sort of data in the URL's query string.
If you were constructing the URL by hand,
this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val.
Requests allows you to provide these arguments as a dictionary of strings, using the params keyword argument.
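For example, a small sketch of the same idea (the api_key value is a placeholder, and limit is assumed here to be an optional paging parameter):
url = 'https://openapi.etsy.com/v2/listings/active'
payload = {'api_key': 'YOUR_KEY_HERE', 'limit': 10}
r = requests.get(url, params=payload)
print(r.url)    # .../listings/active?api_key=YOUR_KEY_HERE&limit=10
print(r.json())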
Question: I am trying to upload a picture ... getting a 403 error
Your url endpoint and payload aren't correct.
url = 'https://openapi.etsy.com/v2/listings'
payload = {'listing_id':'342434342', 'image': ("test1.jpg", open('C:\\Users\\abc\\test1.jpg'),'image/jpeg'),'type':'image/jpeg'}
Steps to do an etsy request (uploadListingImage):
Read the Reference for your Method Name
Method Name uploadListingImage
HTTP Method POST
URI /listings/:listing_id/images
Parameters Name Required Default Type
listing_id Y int
listing_image_id N int
image N imagefile
...
Requires OAuth Y
Respect the supported sizes: see Working with Images.
Note: For me, it's unclear what the image parameter is for,
and as it's NOT required, that makes no sense.
I assume it's a placeholder for the parameter at point 4 below: {'image':...
Build the URI
uri = 'https://openapi.etsy.com/v2/listings/342434342/images'
Create the Params Dict according to the above Reference
I recommend using a listing_image_id, as this seems to be the only way to delete an image afterwards.
params = {'listing_id':'342434342', 'listing_image_id': 1}
Create Multipart-Encoded File Dict
Image uploads can be performed using a POST request with the Content-Type: multipart/form-data header, following RFC 1867.
# PHP example from Reference:
# $params = array('#image' => '#'.$source_file.';type='.$mimetype);
files = {'image': ("test1.jpg", open('C:\\Users\\abc\\test1.jpg', 'rb'), 'image/jpeg')}
Do the request; according to the Reference, you have to use OAuth and POST
result = etsy.post(uri, params=params, files=files)
Please Comment if this is working for you or why not.