I am querying price data from an API using the requests.get function in Python.
The query looks like:
requests.get(url_specified_contract, headers=headers, params={'SymbolKeys':'NDX GNM FMZ0022!'}).json()
where 'SymbolKeys' identifies the contract I am looking for.
Several contracts are traded for different delivery periods, with similar SymbolKeys that vary only in the final part. For instance:
'NDX GNM FMZ0022!'
'NDX GNM FMZ0023!'
'NDX GNM FMZ0024!'
....
Since the varying component depends on the commodity I am looking at, I would like an easy way to get all IDs containing a specified string (in my example 'NDX GNM'), without knowing in advance which ones exist.
I was trying queries like:
requests.get(url_quotes, headers=headers, params={'SymbolKeys':'*NDX GNM*'}).json()
but without success:
{'Error': 'Unknown symbol(s): NDX GNM. This/these symbol(s) will not be part of the result.'}
The only solution I have for the time being is a for loop over each possible integer, like:
quotes = []
for i in range(1, 300):
    try:
        temp = requests.get(url_quotes, headers=headers,
                            params={'SymbolKeys': 'NDX GNM M' + str(i)}).json()
        quotes.append(temp)
    except Exception:
        pass
But this approach uses up a large share of my daily query quota.
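One idea I have not been able to verify against the API docs is whether the endpoint accepts several SymbolKeys in a single request (e.g. as a comma-separated list); even then I would still need to know the suffixes in advance:
# Untested assumption: passing a comma-separated SymbolKeys list in one request.
symbols = ','.join('NDX GNM FMZ00' + str(y) + '!' for y in range(22, 25))
requests.get(url_quotes, headers=headers, params={'SymbolKeys': symbols}).json()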
Could you suggest a possible solution?
Thanks in advance, cheers.
I have a similar problem as in this question (Problem with getting user.fields from Twitter API 2.0), but I am using Tweepy. When making the request with tweet_fields, the response only gives me the default values. In another function where I use user_fields it works perfectly.
I followed this guide, specifically number 17 (https://dev.to/twitterdev/a-comprehensive-guide-for-using-the-twitter-api-v2-using-tweepy-in-python-15d9).
My function looks like this:
def get_user_tweets():
    client = get_client()
    tweets = client.get_users_tweets(id=get_user_id(), max_results=5)
    ids = []
    for tweet in tweets.data:
        ids.append(str(tweet.id))
    tweets_info = client.get_tweets(ids=ids, tweet_fields=["public_metrics"])
    print(tweets_info)
This is my response (with the latest tweets from elonmusk); there is no error code or anything else:
Response(data=[<Tweet id=1471419792770973699 text=#WholeMarsBlog I came to the US with no money & graduated with over $100k in debt, despite scholarships & working 2 jobs while at school>, <Tweet id=1471399837753135108 text=#TeslaOwnersEBay #PPathole #ScottAdamsSays #johniadarola #SenWarren It’s complicated, but hopefully out next quarter, along with Witcher. Lot of internal debate as to whether we should be putting effort towards generalized gaming emulation vs making individual games work well.>, <Tweet id=1471393851843792896 text=#PPathole #ScottAdamsSays #johniadarola #SenWarren Yeah!>, <Tweet id=1471338213549744130 text=link>, <Tweet id=1471325148435394566 text=#24_7TeslaNews #Tesla ❤️>], includes={}, errors=[], meta={})
I found this link: https://giters.com/tweepy/tweepy/issues/1670. According to it,
Response is a namedtuple. Here, within its data field, is a single Tweet object.
The string representation of a Tweet object will only ever include its ID and text. This was an intentional design choice, to reduce the excess of information that could be displayed when printing all the data as the string representation, as with models.Status. The ID and text are the only default / guaranteed fields, so the string representation remains consistent and unique, while still being concise. This design is used throughout the API v2 models.
To access the data of the Tweet object, you can use attributes or keys (like a dictionary) to access each field.
If you want all the data as a dictionary, you can use the data attribute/key.
In that case, to access public metrics, you could maybe try doing this instead:
tweets_info = client.get_tweets(ids=ids, tweet_fields=["public_metrics"])
for tweet in tweets_info.data:
    print(tweet["id"])
    print(tweet["public_metrics"])
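And if, as mentioned above, you want the whole payload for each tweet as a dictionary, the Tweet object's data attribute gives you exactly that:
for tweet in tweets_info.data:
    print(tweet.data)  # the full Tweet payload as a dict, including public_metrics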
Trying to fetch my Binance account's order history with the python-binance module. There is an option to get all orders within one symbol (see documentation):
orders = client.get_all_orders(symbol='BNBBTC', limit=10)
But the problem is that I can't pass more than one coin in the symbol parameter.
How can I pass a list for the symbol parameter? I want to fetch the order history for more than one coin in a single function call, as I'm trying to build a portfolio for my Binance account. Or is there another method to do so?
Currently it is not possible to get all historical orders or trades in one call without specifying the symbol, even without the python-binance module.
There is an ongoing discussion on the Binance forum, requesting this feature.
As a workaround:
If you know your ordered symbols: call get_all_orders() once per symbol in a loop (see the sketch after this list).
If you don't know your ordered symbols: you could send a GET request for each symbol available at Binance (as mentioned in the discussion linked above). But be careful with the rate limits.
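A minimal sketch of the first workaround, assuming you already know which symbols you traded (the symbol list below is just an example):
my_symbols = ['BNBBTC', 'ETHBTC']  # hypothetical: the symbols you actually traded

all_orders = []
for sym in my_symbols:
    # one request per symbol; get_all_orders() requires a symbol argument
    all_orders.extend(client.get_all_orders(symbol=sym))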
I was asking myself the same thing. Well, a workaround would be to iterate over all the tickers available on Binance, looking for the ones we traded in the past.
If you are working with the free plan of the API, the best option would be to set up a storage file or database and store all the results. Then you only have to keep up with the changes from there.
Yeah, that is exactly how I am going to deal with this.
(edit):
A sleep call will be needed to avoid exceeding 1200 queries per minute.
(example):
def getAllTickers(self):
    # Get all available exchange tickers
    exchangeInfo = self.client.get_exchange_info()
    # Extract the tickers' general info
    exchangeSymbols = []
    for i in exchangeInfo['symbols']:
        exchangeSymbols.append(i)
    return exchangeSymbols

def getMyTrades(self, strSymbol):
    return self.client.get_my_trades(symbol=strSymbol)

def getMyTradedTickers(self):
    tickers = self.getAllTickers()
    # Extract every ticker where a trade happened
    traded = []
    for i in tickers:
        tickerTransactions = self.getMyTrades(i["symbol"])
        if tickerTransactions:
            traded.append(tickerTransactions)
            print(i["symbol"], " transactions available")
        else:
            print(i["symbol"], " has no transactions")
        self.time.sleep(0.1)
    return traded
Sorry for the code quality. Python is not my main coding language and I'm still getting used to it.
I want to retrieve all available voice phone numbers (the phone numbers only) using a pattern search, without all the other parameters.
I have tried the API code given by Nexmo. It works, but I only get a limited number of phone numbers, and I am also getting a bunch of other parameters I don't want. Here are the two API calls I am using:
phnumbers = client.get_available_numbers("US", {"features": "VOICE"})
phnumbers = client.get_available_numbers("US", {"pattern": "007", "search_pattern": 2})
I just want to have a list of available numbers; I don't care if it's 1000. I'm not sure if there is a way to limit the number it brings back. Currently I'm getting a limited number of results, with parameters like the following:
{'count': 394773, 'numbers': [{'country': 'US', 'msisdn': '12014790696', 'cost': '0.90', 'type': 'mobile-lvn', 'features': ['VOICE', 'SMS']}
That's one number. I only want to tell it to give me all the voice numbers and get them back in a list... Thank you in advance for your help.
I looked at the docs and I don't think it's possible to get only the phone number (also called msisdn) back.
Instead, for each number, you'll get an object which includes country, cost, type, etc., as part of what the docs call "A paginated array of available numbers and their details".
If you look at the response, you can see that you get count as the first key/value pair; in your example the count is 394773, which is the total number of numbers available for the search condition you specified when you made the request.
Now, I don't know all the reasons, but sending back one response with a payload of 394773 numbers would probably tax the system too much.
What you can do:
From my tests, if you specify a size of 100, you'll get a response with 100 records per page, and you can use the index parameter to paginate (anything above 100 for size and you only get 10 records).
So, if the count is 394773 for your search query, with size = 100 you have 3947 + 1 pages (the last page, index = 3948, only has 73 records), and you would have to fetch them one by one, with a total of 3948 requests, passing the appropriate index value.
Of course you can reduce the count if you pass a more specific search query.
I understand what you want, and I don't work for Nexmo, but again, after reading the docs I don't think it's possible to get everything back in just one request. You'll just need to be more specific in your search query.
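As a rough sketch of that pagination, assuming the Python client simply forwards the size and index search parameters described in the docs (untested):
msisdns = []
index = 1
while True:
    page = client.get_available_numbers("US", {"features": "VOICE", "size": 100, "index": index})
    numbers = page.get("numbers", [])
    if not numbers:
        break
    # keep only the phone number itself from each record
    msisdns.extend(n["msisdn"] for n in numbers)
    index += 1
print(len(msisdns))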
Docs:
Retrieve inbound numbers that are available for the specified country.
I'm new to Python and having some trouble with an API I'm attempting to scrape. What I want to do is pull a list of book titles using this code:
r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
    for title in doc["sourceResource"]["title"]:
        print(title)
This works to pull the titles, but most (not all) of them are output one character per line. I've tried adding .splitlines(), but that doesn't fix the problem. Any advice would be appreciated!
The problem is that you have two types of title in the response: some are plain strings, like "Germain the wizard", and others are arrays of strings, like ['Joe Strong, the boy wizard : or, The mysteries of magic exposed /']. It seems that in this particular case all the lists have length one, but I guess that will not always be the case. To illustrate what you might need to do, I added a join here instead of just taking title[0].
import requests
import json

r = requests.get('https://api.dp.la/v2/items?q=magic+AND+wizard&api_key=09a0efa145eaa3c80f6acf7c3b14b588')
data = json.loads(r.text)
for doc in data["docs"]:
    title = doc["sourceResource"]["title"]
    if isinstance(title, list):
        print(" ".join(title))
    else:
        print(title)
In my opinion that should never happen; an API should return predictable types, otherwise it looks messy on the user's side.
Currently I have a mongo document that looks like this:
{'_id': id, 'title': title, 'date': date}
What I'm trying to do is search within this collection by title. The database has about 5k items, which is not much, but my file has 1 million titles to search for.
I have ensured an index on title within the collection, but the performance is still quite slow (about 40 seconds per 1000 titles, which is to be expected since I'm doing one query per title). Here is my code so far:
Work repository creation:
class WorkRepository(GenericRepository, Repository):
    def __init__(self, url_root):
        super(WorkRepository, self).__init__(url_root, 'works')
        self._db[self.collection].ensure_index('title')
The entry point of the program (it is a REST API):
start = time.clock()
for work in json_works:  # 1000 titles per request
    result = work_repository.find_works_by_title(work['title'])
    if result:
        works[work['id']] = result
end = time.clock()
print end - start
return json_encoder(request, works)
and the find_works_by_title code:
def find_works_by_title(self, work_title):
    works = list(self._db[self.collection].find({'title': work_title}))
    return works
I'm new to Mongo and I've probably made some mistake; any recommendations?
You're making one call to the DB for each of your titles. The round trips are going to slow the process down significantly (the program and the DB will spend most of their time on network communication instead of actual work).
Try the following (adapt it to your program's structure, of course):
# Build a list of the 1000 titles you're searching for.
titles = [w["title"] for w in json_works]
# Make exactly one call to the DB, asking for all of the matching documents.
return collection.find({"title": {"$in": titles}})
Further reference on how the $in operator works: http://docs.mongodb.org/manual/reference/operator/query/in/
If after that your queries are still slow, use explain on the find call's return value (more info here: http://docs.mongodb.org/manual/reference/method/cursor.explain/) and check that the query is, in fact, using an index. If it isn't, find out why.
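A minimal sketch of that check, assuming pymongo, with collection and titles as defined above:
# Ask MongoDB how it plans to execute the $in query.
plan = collection.find({"title": {"$in": titles}}).explain()
print(plan)
# You want to see an index scan on the title index (e.g. an "IXSCAN" stage,
# or "BtreeCursor" on older servers) rather than a full collection scan.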