How do I retrieve available phone numbers in Nexmo? - python

I want to retrieve all available voice phone numbers (the phone numbers only) using a pattern search, without all the other parameters.
I have tried the API code given by Nexmo. It works, but I only get a limited number of phone numbers, and I am also getting a bunch of other parameters I don't want. Here are the two API calls I am using:
phnumbers = client.get_available_numbers("US", {"features": "VOICE"})
phnumbers = client.get_available_numbers("US", {"pattern": "007", "search_pattern": 2})
I just want a list of available numbers. I don't care if it's 1000. I'm not sure if there is a way to limit how many it brings back. Currently I'm getting a limited number of entries, with parameters, like the following:
{'count': 394773, 'numbers': [{'country': 'US', 'msisdn': '12014790696', 'cost': '0.90', 'type': 'mobile-lvn', 'features': ['VOICE', 'SMS']}
That's one number. I only want to tell it to give me all the voice numbers and get them back in a list. Thank you in advance for your help.

I looked at the docs and I don't think it's possible to only get the phone number (also called msisdn) back.
Instead, for each number you'll get an object that includes country, cost, type, and so on, as part of what the docs call "A paginated array of available numbers and their details".
If you look at the response, you can see that count is the first key/value pair; in your example the count is 394773, and this is the total number of numbers available for the search condition you specified when you made the request.
Now, I don't know all the reasons, but sending back a single response with a payload of 394773 numbers would probably tax the system too much.
What you can do:
From my tests, if you specify a size of 100, you'll get a response with 100 records per page, and you can use the index parameter to paginate (anything above 100 for size and you only get 10 records back).
So, if the count is 394773 for your search query, then with size = 100 there are 3947 + 1 pages (the last page, index = 3948, only has 73 records), and you would have to fetch them one page at a time, passing the appropriate index value, for a total of 3948 requests.
Of course you can reduce the count if you pass a more specific search query.
I understand what you want, and I don't work for Nexmo, but again, after reading the docs I don't think it's possible to get everything back in just one request. You'll just need to be more specific in your search query.
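As a minimal sketch of that pagination loop, assuming the same client object as in your snippets and the size/index behavior described above, something like this would collect only the msisdn values:
msisdns = []
index = 1
while True:
    # size=100 records per page; index selects the page (1-based)
    response = client.get_available_numbers(
        "US", {"features": "VOICE", "size": 100, "index": index}
    )
    numbers = response.get("numbers", [])
    if not numbers:
        break
    # keep only the phone number itself
    msisdns.extend(n["msisdn"] for n in numbers)
    index += 1
print(len(msisdns))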
Docs:
Retrieve inbound numbers that are available for the specified country.

Related

Query all elements containing a selected string (SymbolKeys) from an API

I am querying price data from an API, using the requests.get function in Python.
The query looks like:
requests.get(url_specified_contract, headers=headers, params={'SymbolKeys':'NDX GNM FMZ0022!'}).json()
where 'SymbolKeys' identifies the contract I'm looking for.
Several contracts are traded for specific delivery periods, with a similar SymbolKey that varies only in the latter part. For instance:
'NDX GNM FMZ0022!'
'NDX GNM FMZ0023!'
'NDX GNM FMZ0024!'
....
Since the changing component could vary depending on the commodity I look at, I would like an easy way to get all IDs containing a specified string (in my example, 'NDX GNM'), without knowing in advance which ones exist.
I was trying queries like:
requests.get(url_quotes, headers=headers, params={'SymbolKeys':'*NDX GNM*'}).json()
Without success: {'Error': 'Unknown symbol(s): NDX GNM. This/these symbol(s) will not be part of the result.'}
The only solution for the time being is a for loop over each possible integer, like:
quotes = []
for i in range(1, 300):
    try:
        temp = requests.get(url_quotes, headers=headers,
                            params={'SymbolKeys': 'NDX GNM M' + str(i)}).json()
        quotes.append(temp)
    except Exception:
        pass
But this approach consumes a large number of daily queries.
Could you suggest a possible solution?
Thanks in advance, cheers.
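If the brute-force loop has to stay, one way to stop burning the daily quota is to break once several consecutive symbols come back unknown, on the assumption that the numeric suffixes are roughly contiguous. A sketch (the 'Error' key is taken from the error response shown above, and the cutoff of 5 misses is a placeholder):
import requests

quotes = []
misses = 0
for i in range(1, 300):
    resp = requests.get(url_quotes, headers=headers,
                        params={'SymbolKeys': 'NDX GNM M' + str(i)}).json()
    if 'Error' in resp:
        # unknown symbol: count the miss and stop after a run of them
        misses += 1
        if misses >= 5:
            break
        continue
    misses = 0
    quotes.append(resp)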

How to make a comparator that selects certain rows based on characteristics?

So I'm trying to build a subscription comparator based on a Google Sheet. The sheet has subscription characteristics in the first row, and every other row is a subscription (g sheets, so you can see what I mean). I have tried different methods, but I didn't succeed with any of them (this one was great, but it sent too many requests and therefore the API blocked me after 100 requests / 100 seconds; I also tried to apply the link/unlink method for batching commands, but it didn't work either).
Therefore I need your help. I have very little knowledge of Python, though. I have downloaded pygsheets, but really I don't mind what I need to do to succeed.
If you want an example, it would be something like this: the user types whatever they want for price, GB of mobile data, etc., and it returns the subscriptions that match their criteria. Here I am just asking how to solve the first part, which is getting the rows that match the criteria; how the user enters the criteria is not the problem for now.
I am not sure what you mean by a subscription comparator. Anyway, based on your linked question, I have updated the accepted answer to reduce API calls.
# List of all values in the 4th (price) column
prices = wks.get_col(4)
# Remove non-numeric characters from the prices (skipping the header row)
prices = [p.replace('*', '') for p in prices[1:]]
# Get indices of rows with price >= 50
# (i + 2 accounts for 1-based indexing and the removed header row)
indices = [i + 2 for i, p in enumerate(prices) if float(p) >= 50]
# Fetch those rows in one batched call
rows = wks.get_values_batch([(str(x), None) for x in indices])
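To extend this to more than one criterion (price plus GB of mobile data, say), the same pattern works. In this sketch the data allowance is assumed to live in a hypothetical 5th column, and the thresholds are placeholders:
# Hypothetical layout: prices in column 4, GB of mobile data in column 5
prices = wks.get_col(4)[1:]
data_gb = wks.get_col(5)[1:]
# Keep rows costing at most 50 with at least 10 GB (placeholder thresholds)
indices = [
    i + 2
    for i, (p, g) in enumerate(zip(prices, data_gb))
    if float(p.replace('*', '')) <= 50 and float(g) >= 10
]
rows = wks.get_values_batch([(str(x), None) for x in indices])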

How to get the total number of items in a single G Suite API query?

I am developing a simple app to consume data from some G Suite APIs (Admin SDK, Drive, Gmail, etc.).
G Suite API endpoints that allow the list method (for collections) respond to queries in the following form (content may vary from API to API):
{
  "kind": "admin#directory#users",
  "etag": "\"WczyXiapC9UmAQ6oKabcde6P59w-7argQ83zwDwKoUE/zsH-hyZTP1lFsB3-wabK4_8VXMk\"",
  "users": [
    {
      "kind": "admin#directory#user",
      "id": "137674315191655104007",
      "etag": "\"WczyXiapC9..."
      ...
    },
    ...
    # N elements of type 'user', where N <= maxResults,
    # <maxResults> being the maximum number of elements in the response per query.
    # <maxResults> has a system default value.
  ]
}
In order to get the total number of elements available for consumption in that API, I may encounter the following cases:
A single query, if the total number of available elements is less than or equal to maxResults.
More than one query, if the total number of available elements is greater than maxResults.
When the second case occurs, the G Suite API returns a pagination token, which I use in successive queries to retrieve more pages of up to maxResults elements each.
Once I have consumed all the elements, I can compute the total number.
My question is:
Is it possible to retrieve the total number of elements (just the integer value) with a single API call and thus avoid pagination?
Thank you for your answers.
Is it possible to retrieve the total number of elements (just the integer value) with a single API call and thus avoid pagination?
If a method has a parameter called maxResults, that is because there is a maximum number of rows a single call can return.
If you look at the documentation for the Google Drive API files.list method:
The maximum number of files to return per page. Partial or empty result pages are possible even before the end of the files list has been reached. Acceptable values are 1 to 1000, inclusive. (Default: 100)
This means it can return a maximum of 1000 files per call; after that you will need to paginate. There is no way around this limitation in the API.
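A minimal sketch of that pagination, assuming an authenticated Drive v3 service object built with googleapiclient.discovery.build (the service variable name is an assumption for the example); the total is accumulated page by page, since no single-call count is available:
total = 0
page_token = None
while True:
    response = service.files().list(
        pageSize=1000,  # the documented per-page maximum
        fields="nextPageToken, files(id)",
        pageToken=page_token,
    ).execute()
    total += len(response.get("files", []))
    page_token = response.get("nextPageToken")
    if page_token is None:
        break
print(total)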

Tumblr API paging bug when fetching followers?

I'm writing a little Python app to fetch the followers of a given Tumblr blog, and I think I may have found a bug in the paging logic.
The Tumblr blog I am testing with has 593 followers, and I know the API limits each call to blocks of 20. After successful authentication, the fetch logic looks like this:
offset = 0
while True:
    response = client.followers(blog, limit=20, offset=offset)
    bunch = len(response["users"])
    if bunch == 0:
        break
    j = 0
    while j < bunch:
        print response["users"][j]["name"]
        j = j + 1
    offset += bunch
What I observe is that on the third call into the API, with offset=40, the first name returned is one I saw in the previous group; it's actually the 38th name. This behavior (seeing one or more names I've seen before) repeats randomly from that point on, though not on every call to the API; some calls give me a fresh 20 names. It's repeatable across multiple test runs. The sequence I see them in is the same as on Tumblr's site; I just see many of them twice.
An interesting coincidence is that the total number of non-unique followers returned is the same as what the "Followers" count indicates on the blog itself (593), but only 516 of them are unique.
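To confirm the duplication, a small variation of the loop above can count unique names client-side (a sketch, reusing the same client and blog as before):
seen = set()
offset = 0
while True:
    response = client.followers(blog, limit=20, offset=offset)
    users = response["users"]
    if not users:
        break
    for user in users:
        # a set collapses the duplicated entries
        seen.add(user["name"])
    offset += len(users)
print(len(seen))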
For what it's worth, running the query on Tumblr's console page returns the same results regardless of the language I choose, so I'm not inclined to think this is a bug in the PyTumblr client, but in something lower, at the API level.
Any ideas?

Python win32com.adsi module limits number of returned members from AD

Using the following code...
import win32com.adsi

# Bind to rootDSE to discover the default naming context
DNC = win32com.adsi.ADsGetObject('LDAP://rootDSE').Get('DefaultNamingContext')
path = 'LDAP://cn=BIG_GROUP,ou=Groups,' + DNC
# Bind to the group and read its member attribute
groupobj = win32com.adsi.ADsGetObject(path)
users = groupobj.member
print len(users)
The output is always a maximum 1500, even if BIG_GROUP contains several thousand members. How can I execute this query in a way that returns all members of BIG_GROUP?
AD returns N results at a time from a large attribute (like member), where N is the maximum range-retrieval size. The directory supports something called ranged retrieval, which lets you fetch groupings of up to 1500 values per fetch.
You should use the ranged-retrieval control against the directory. I don't know whether your LDAP API supports it, but its docs should answer that.
Here is a bit more in the way of info, from the MSFT docs.
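For illustration, a hedged sketch of what ranged retrieval might look like through win32com.adsi, using the attribute;range=low-high syntax from the MSFT docs and the same path as above (the slice size of 1500 and the loop-termination logic are assumptions):
import win32com.adsi

groupobj = win32com.adsi.ADsGetObject(path)  # same path as above
members = []
low = 0
step = 1500  # assumed server-side range-retrieval limit
while True:
    rng = 'member;range=%d-%d' % (low, low + step - 1)
    try:
        # Load just this slice of the attribute into the property cache
        groupobj.GetInfoEx([rng], 0)
        chunk = groupobj.Get('member')
    except Exception:
        break  # servers reject ranges that start past the end of the attribute
    members.extend(chunk)
    if len(chunk) < step:
        break  # short final slice: we have everything
    low += step
print(len(members))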
