How to get PartitionKeyRangeId of a container in Azure Cosmos Python SDK

I am trying to implement the pull model to query the change feed using the Azure Cosmos Python SDK. I found that, to parallelise the querying process, the official documentation suggests obtaining FeedRange values and creating a FeedIterator to iterate through each range of partition key values covered by a FeedRange.
Currently my code snippet to query the change feed looks like this, and it is pretty straightforward:
# function to get items from the change feed based on a condition
def get_response(container_client, condition):
    # Historical data read
    if condition:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=True,
            # partition_key_range_id=0
        )
    # Reading from a checkpoint
    else:
        response = container_client.query_items_change_feed(
            is_start_from_beginning=False,
            continuation=last_continuation_token  # token saved from a previous read
        )
    return response
The problem with this approach is efficiency when reading all items from the beginning (the historical data read). I tried this method on a fairly small dataset of 500 items and the response took around 60 seconds; when dealing with millions or even billions of items, the response might take far too long to return.
Would querying the change feed in parallel for each partition key range save time?
If yes, how do I get the PartitionKeyRangeId in the Python SDK?
Are there any problems I need to consider when implementing this?
I hope this makes sense!
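For reference, newer SDK versions expose FeedRange objects rather than explicit PartitionKeyRangeIds, as the documentation you mention describes. Below is a minimal sketch of a parallel historical read per feed range, assuming your installed azure-cosmos version exposes ContainerProxy.read_feed_ranges() and accepts feed_range and start_time="Beginning" keywords on query_items_change_feed() (check the changelog for the version you have installed):

from concurrent.futures import ThreadPoolExecutor

def read_feed_range_from_beginning(container, feed_range):
    # Drain the change feed for a single feed range from the beginning.
    return list(container.query_items_change_feed(
        feed_range=feed_range,
        start_time="Beginning",
    ))

def read_change_feed_parallel(container, max_workers=4):
    feed_ranges = list(container.read_feed_ranges())
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_range = pool.map(
            lambda fr: read_feed_range_from_beginning(container, fr),
            feed_ranges,
        )
        # Flatten the per-range results into one list of items.
        return [item for chunk in per_range for item in chunk]

Each range also produces its own continuation token, so a checkpointing scheme would need to store one token per feed range rather than a single container-wide token.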

Related

Getting track's URI's from Spotify API using Spotipy?

I am working on collecting a data set that cross-references each track's audio features with the Billboard chart data set available on Kaggle. I am trying to get each song's URI in order to then get its audio features, and I defined the following function:
def get_track_uri(track_title, sp):
    result = sp.search(track_title, type="track", limit=1)
    if result['tracks']['total'] > 0:
        track_uri = result['tracks']['items'][0]['uri']
        return track_uri
    else:
        return None
and then it goes through the Billboard 'song' column in order to create a new column with the URIs.
cleandf['uri'] = cleandf['song'].apply(lambda x: get_track_uri(x, sp))
So I left it running for about 40 minutes and noticed that it got stuck in a sleep call inside Spotipy, which I gathered was because I was making a lot of requests to the Spotify API. How can I work around this if I need to go through 50,000 rows? I could make it wait between search queries, but that would easily take something like 15 hours. Also, there probably is a way to get the audio features directly without fetching the URIs first, but it would still need to go through all of the rows.
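One thing that can cut the request count considerably: the search endpoint has to be called once per title, but Spotipy's audio_features() accepts up to 100 track IDs per call, so at least the feature lookup can be batched. A rough sketch, where sp is the authenticated spotipy.Spotify client from the question and the batch size and pause are arbitrary values to tune:

import time

def get_audio_features(uris, sp, batch_size=100, pause=0.5):
    # audio_features() accepts up to 100 IDs per request, so one call
    # covers a whole batch instead of one call per track.
    features = []
    for i in range(0, len(uris), batch_size):
        batch = [u for u in uris[i:i + batch_size] if u]  # drop tracks the search didn't find
        if batch:
            features.extend(sp.audio_features(batch))
        time.sleep(pause)  # small pause to stay friendly to the rate limit
    return features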

How to speed up the process of converting QuerySet object into JSON array in Python

I am using exchangelib to fetch emails. My function retrieves nearly 150,000 emails (the output is a QuerySet) in a second. I have to convert the QuerySet into a JSON array for further processing. Right now it takes about an hour to convert the QuerySet object into a JSON array, but I want this conversion to happen in seconds.
Sample code:
from datetime import timedelta
from exchangelib import Q, UTC_NOW

MailsArray = []
query_filter = Q(sender='xyz@abc.com')
timeLimit = UTC_NOW() - timedelta(hours=1)
# This step returns data in seconds
Inbox_mails = account.inbox.all().filter(query_filter, datetime_received__gt=timeLimit).only('subject', 'sender', 'conversation_id')
# This step takes a lot of time
for x in Inbox_mails:
    MailsArray.append({"Subject": x.subject, "ID": x.conversation_id.id})
Any ideas on converting the QuerySet data into a JSON array quickly would be appreciated.
As Jeronimo hinted, the creation of Inbox_mails doesn't actually fetch the emails. It's just a queryset definition, and the actual fetching of items happens when you iterate over the Inbox_mails object. Your problem is not about the conversion to JSON, but rather fetching the data.
150,000 emails is a lot, and it's probably going to be slow no matter how you do this. But you can try changing the paging size used to fetch items. See https://ecederstrand.github.io/exchangelib/#paging for how to do that.
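As a rough sketch of that paging suggestion, assuming your exchangelib version lets you set page_size on a queryset as the linked docs describe (account is the exchangelib Account from the question, and 1000 is an arbitrary value to tune):

from datetime import timedelta
from exchangelib import Q, UTC_NOW

query_filter = Q(sender='xyz@abc.com')
time_limit = UTC_NOW() - timedelta(hours=1)

qs = account.inbox.all().filter(
    query_filter, datetime_received__gt=time_limit
).only('subject', 'sender', 'conversation_id')
qs.page_size = 1000  # items fetched per server round-trip; tune this value

# The items are only fetched here, while iterating.
mails_array = [{"Subject": m.subject, "ID": m.conversation_id.id} for m in qs]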

Efficient way to get data from lotus notes view

I am trying to get all the data from a view (Lotus Notes) with LotusScript and Python (the noteslib module) and export it to CSV, but the problem is that this takes too much time. I have tried two ways of looping through all documents:
import noteslib

db = noteslib.Database('database', 'file.nsf')
view = db.GetView('My View')
doc = view.GetFirstDocument()
data = list()
while doc:
    data.append(doc.ColumnValues)
    doc = view.GetNextDocument(doc)
Getting about 1,000 lines of data took 70 seconds, but the view has about 85,000 lines, so getting all the data would take far too long; by comparison, manually using File -> Export in Lotus Notes takes about 2 minutes to export everything to CSV.
I also tried a second way with AllEntries, but it was even slower:
database = []
ec = view.AllEntries
ent = ec.GetFirstEntry()
while ent:
    row = []
    for v in ent.ColumnValues:
        row.append(v)
    database.append(row)
    ent = ec.GetNextEntry(ent)
Everything that I found on the Internet is based on "NextDocument" or "AllEntries". Is there any way to do it faster?
It is (or at least used to be) very expensive from a time standpoint to open a Notes document, like you are doing in your code.
Since you are saying that you want to export the data that is being displayed in the view, you could use the NotesViewEntry class instead. It should be much faster.
Set col = view.AllEntries
Set entry = col.GetFirstEntry()
Do Until entry Is Nothing
    values = entry.ColumnValues '*** Array of column values
    '*** Do stuff here
    Set entry = col.GetNextEntry(entry)
Loop
I wrote a blog about this back in 2013:
http://blog.texasswede.com/which-is-faster-columnvalues-or-getitemvalue/
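A rough Python equivalent of the LotusScript above, reusing the noteslib/COM calls already shown in the question and writing rows straight to CSV; whether it is faster in practice will depend on the COM overhead discussed in the other answers:

import csv
import noteslib

db = noteslib.Database('database', 'file.nsf')
view = db.GetView('My View')
view.AutoUpdate = False  # don't let the view refresh while we read it

with open('export.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    collection = view.AllEntries
    entry = collection.GetFirstEntry()
    while entry:
        writer.writerow(list(entry.ColumnValues))  # one CSV row per view entry
        entry = collection.GetNextEntry(entry)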
Something is going on with your code "outside" the view navigation: you already chose the most performant way to navigate a view, using "GetFirstDocument" and "GetNextDocument". Using the NotesViewNavigator as mentioned in the comments will be slightly better, but not significantly so.
You might get a little bit of performance out of your code by setting view.AutoUpdate = False to prevent the view object from refreshing when something in the backend changes. But as you only read data and do not change view data, that will not give you much of a performance boost.
My suggestion: identify the REAL bottleneck of your code by commenting out individual sections to find out when it starts to get slower:
First attempt:
while doc:
    doc = view.GetNextDocument(doc)
Slow?
If not then next attempt:
while doc:
    arr = doc.ColumnValues
    doc = view.GetNextDocument(doc)
Slow?
If yes: ColumnValues is your enemy...
If not then next attempt:
while doc:
    arr = doc.ColumnValues
    data.append(arr)
    doc = view.GetNextDocument(doc)
I would be very interested to get your results of where it starts to become slow.
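A small timing harness along those lines, using only the objects from the question (view is the NotesView from the original code), might look like this:

import time

def time_it(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

def navigate_only():
    doc = view.GetFirstDocument()
    while doc:
        doc = view.GetNextDocument(doc)

def navigate_and_read_columns():
    doc = view.GetFirstDocument()
    while doc:
        _ = doc.ColumnValues
        doc = view.GetNextDocument(doc)

time_it("navigation only", navigate_only)
time_it("navigation + ColumnValues", navigate_and_read_columns)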
I would suspect the performance issue is using COM/ActiveX in Python to access Notes databases. Transferring data via COM involves datatype 'marshalling', possibly at every step, and especially for 'out-of-process' method/property calls.
I don't think there is any way around this in COM. You should consider arranging a Notes 'agent' to do this for you instead (LotusScript or Java, maybe). Even a basic LotusScript agent can export thousands of documents per minute. A further alternative may be to look at the Notes C API (not an easy option, and it requires making API calls from Python).

How to insert clean data into Firebase via REST API and Python

I have a simple Firebase that I mostly interact with via Javascript, which works really well. However, I also have a Python program that needs to get data from existing children and put/update data on existing children. I tried python-firebasin, which would do what I want, but it is unreliable (hangs, fails, etc.).
So I'm looking at the python-firebase REST wrapper. This seems efficient, and works well. However, every time I try to post() data, I get not just the data I'm posting, but some kind of unique string paired with it, all inserted as a child.
For example, via Javascript, I might say:
db = new Firebase('https://myfirebase.firebaseio.com/testval/');
db.transaction(function(current) { return 1; });
This would then give me a Firebase that looked like:
|---testval: 1
But when I try to do something similar with the Python Firebase REST wrapper, such as:
db = firebase.FirebaseApplication('https://myfirebase.firebaseio.com/')
db.post('/testval/',1)
My Firebase looks something like this:
|---testval:
|---JI4BiBbICSEAnM9mDXf: 1
In other words, it inserts a new child, gives it a new string, and then appends the data. Is there any way to insert/modify data on my Firebase using the REST wrapper that would do it cleanly like I'm doing with Javascript? Without adding children, without adding these unique strings?
Try this instead:
db.put(1)
db.post() is the equivalent of .push() in the JavaScript API, so it creates a unique ID for you. db.put() is equivalent to .set() and will just set the data, which appears to be what you want.
Note that there is no equivalent for transactions in the REST API, but your example was just using a transaction to do a .set() so hopefully you don't actually need them.
Try this:
db = firebase.FirebaseApplication('https://myfirebase.firebaseio.com/')
db.put('', 'testval', 1)
put takes three arguments: the first is the URL or path, the second is the key (snapshot) name, and the third is the data (JSON).
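If you would rather not use the wrapper at all, the same write can be done against the Firebase REST API directly: a PUT to the path with .json appended sets the value at that path without generating a push ID. A minimal sketch with requests (add an auth token if your security rules require one):

import requests

# PUT sets the value at /testval directly (no generated child key).
resp = requests.put(
    'https://myfirebase.firebaseio.com/testval.json',
    json=1,  # the JSON value to store at this path
)
resp.raise_for_status()
print(resp.json())  # echoes the stored value: 1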

GeoModel with Google App Engine - queries

I'm trying to use the GeoModel Python module to quickly access geospatial data in my Google App Engine app.
I just have a few general questions for issues I'm running into.
There are two main methods, proximity_fetch and bounding_box_fetch, that you can use to run queries. They actually return a result set, not a filtered query, which means you need to fully prepare a filtered query before passing it in. It also prevents you from iterating over the query set, since the results are already fetched, and you don't have the option to pass an offset into the fetch.
Short of modifying the code, can anyone recommend a solution for specifying an offset into the query? My problem is that I need to check each result against a variable to see if I can use it, otherwise throw it away and test the next. I may run into cases where I need to do an additional fetch, but starting with an offset.
You can also work directly with the location_geocells of your model.
from geospatial import geomodel, geocell, geomath

# query is a db.GqlQuery
# location is a db.GeoPt
# A resolution of 4 is a box of roughly 150 km
bbox = geocell.compute_box(geocell.compute(geo_point.location, resolution=4))
cell = geocell.best_bbox_search_cells(bbox, geomodel.default_cost_function)
query.filter('location_geocells IN', cell)

# I only want results within 100 km.
FETCHED = 200
DISTANCE = 100

def _func(x):
    x.dist = geomath.distance(geo_point.location, x.location)
    return x.dist

results = sorted(query.fetch(FETCHED), key=_func)
results = [x for x in results if x.dist <= DISTANCE]
There's no practical way to do this, because a call to geoquery devolves into multiple datastore queries, which it merges together into a single result set. If you were able to specify an offset, geoquery would still have to fetch and discard all the first n results before returning the ones you requested.
A better option might be to modify geoquery to support cursors, but each query would have to return a set of cursors, not a single one.
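Given that, the practical workaround is usually to over-fetch and apply the offset and per-result check client-side, along the lines of the first answer. A rough sketch, where MyGeoModel stands for your GeoModel subclass, is_usable, offset and page_size are hypothetical placeholders for your own check and paging values, and the bounding_box_fetch call assumes the (query, bbox, max_results) signature from the GeoModel docs:

FETCH_LIMIT = 200  # over-fetch, since there is no server-side offset

# bounding_box_fetch merges several datastore queries and returns a plain list
results = MyGeoModel.bounding_box_fetch(query, bbox, max_results=FETCH_LIMIT)

usable = [r for r in results if is_usable(r)]   # is_usable: your own per-result check
page = usable[offset:offset + page_size]        # apply the offset in memory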
