Redis python-rom object expiration - python

I am working with Flask and Redis. I've decided to try the rom Redis ORM (http://pythonhosted.org/rom/) to manage some mildly complex data structures. I have a list of objects, let's say:
urls = ['www.google.com', 'www.example.com', 'www.python.org']
I also have the rom model:
class Stored_url(rom.Model):
    url = rom.String(required=True, unique=True, suffix=True)
    salt = rom.String()
    hash = rom.String()
    created_at = rom.Float(default=time.time)
This appears to be working on my dev setup. In my situation, I would like to start from scratch every day with some of the data, and would like to set an expiration time for some objects. I've looked through the documentation at http://pythonhosted.org/rom/rom.html# , but have not found a reference to expiration except in request caching. Is there a way to allow rom objects to expire?

Rom does not offer a built-in method to automatically expire data. This is on purpose. I have explained the reasons why on 3 previous occasions:
https://github.com/josiahcarlson/rom/issues/40
https://github.com/josiahcarlson/rom/pull/47
https://github.com/josiahcarlson/rom/issues/62
TL;DR: Redis does not offer the internal mechanisms necessary to make this automatic (triggers). I provide 2 workarounds in the pull request linked above.

Per the rom documentation, it's better to create a new expire_at float column with index=True; the column can store when the entity is to expire. Then, to expire the data, you can use Model.query.filter(expire_at=(0, time.time())).limit(10) to (for example) get up to the 10 oldest entities that need to be expired.
https://josiahcarlson.github.io/rom/rom.html#expiring-models-ttls
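The same expire_at pattern can be illustrated with a plain sqlite3 table (a sketch only: rom's actual API is the Model/query code above, and the table and column names here are invented for the demo):

```python
import sqlite3
import time

# In-memory table standing in for the indexed expire_at column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stored_url (url TEXT UNIQUE, expire_at REAL)")
conn.execute("CREATE INDEX idx_expire ON stored_url (expire_at)")

now = time.time()
rows = [
    ("www.google.com", now - 120),   # already expired
    ("www.example.com", now - 60),   # already expired
    ("www.python.org", now + 3600),  # expires in an hour
]
conn.executemany("INSERT INTO stored_url VALUES (?, ?)", rows)

# Equivalent of Model.query.filter(expire_at=(0, time.time())).limit(10):
# fetch up to the 10 oldest entities whose expiry time has passed...
expired = conn.execute(
    "SELECT url FROM stored_url WHERE expire_at BETWEEN 0 AND ? "
    "ORDER BY expire_at LIMIT 10",
    (time.time(),),
).fetchall()
expired_urls = [u for (u,) in expired]

# ...then delete them.
for url in expired_urls:
    conn.execute("DELETE FROM stored_url WHERE url = ?", (url,))
conn.commit()
remaining = [u for (u,) in conn.execute("SELECT url FROM stored_url")]
```

The BETWEEN-and-LIMIT query plays the role of rom's range filter over the indexed column; a periodic task (cron, Celery beat, etc.) would run this batch repeatedly.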


How to get ERC20 Token Transaction of a specific contract using Web3py

I'm using web3py and I want to get the transaction history of a specific contract. Here's my code sample
eventSignatureHash = web3.keccak(text='Transfer(address,uint256)').hex()
filter = web3.eth.filter({
    'address': '0x828402Ee788375340A3e36e2Af46CBA11ec2C25e',
    'topics': [eventSignatureHash]
})
I expected to get the ERC20 token transactions related to this contract, as found here, but unfortunately it does not display anything. How do I go about this?
Finally, is there a way to watch these transactions in real time?
What I did is create a contract instance:
contract = web3.eth.contract(address=contract_address)
and then create a transfer filter. It takes optional parameters such as fromBlock, toBlock, and argument_filters (e.g. {"to": users_address} filters for transfers only to that address):
transfer_filter = contract.events.Transfer.createFilter(fromBlock=..., toBlock=..., argument_filters={"to": users_address})
So you can play around with it.
https://web3py.readthedocs.io/en/latest/contracts.html#web3.contract.ContractEvents
found in the event log object section.
As the other answer notes, contract.events provides lots of useful methods. If nothing is returned, specifying fromBlock and toBlock might help.
And besides, an ultimate solution is already provided here -> Advanced example: Fetching all token transfer events.
Finally, is there a way to watch these transactions in real time?
Actually, many nodes support subscription RPCs. However, this feature is not yet supported by web3py (#1402). You can try an SDK in another language, or temporarily adopt the polling method here.
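Until subscriptions land, watching "in real time" usually means polling the filter. A sketch of that loop, with a stand-in fetch function in place of transfer_filter.get_new_entries() (the function and variable names below are made up for the demo):

```python
import time

def watch(fetch_new_entries, handle, interval=2.0, max_polls=None):
    """Poll fetch_new_entries() forever (or max_polls times),
    passing each new event to handle()."""
    polls = 0
    while max_polls is None or polls < max_polls:
        for event in fetch_new_entries():
            handle(event)
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(interval)

# Demo with a fake event source instead of a real web3 filter.
batches = [["tx1", "tx2"], [], ["tx3"]]
seen = []
watch(lambda: batches.pop(0) if batches else [], seen.append,
      interval=0.0, max_polls=3)
```

With web3py this would look like watch(transfer_filter.get_new_entries, print), run in its own thread or process.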

Google App Engine: using users.get_current_user().user_id() as id of NDB entity

I am working on a web app based on Google App Engine (Webapp2/Python and ndb).
I am using the Google OAuth2 authentication and storing in a custom User Entity of my ndb the googleId I get through users.get_current_user().user_id().
Thus this entity has both an id (automatically provided by the ndb) and this property called googleId, which is set by me.
I use this user object as the common ancestor of the other data I store in the ndb.
This approach is quite annoying because, to avoid multiple ndb queries (one for each request), I have to store in the session the id of the currently logged-in user and its google id, AND verify whether it is different from the currently logged-in user.
I have therefore thought to use the googleId as the KEY of the ndb entity and use it in the ancestor queries.
Like
mu = MyUser(id = users.get_current_user().user_id())
mu.put()
It works perfectly, but I was wondering if there could be any valid reason not to do so (e.g. the googleId may be longer than the maximum size of ndb id properties, etc.)
I'd need to see your MyUser model (and maybe some other code) to be more confident, but, assuming that's all in a pretty normal arrangement, I don't think you'll run into any trouble.
Datastore ids can be pretty long, and the user_id, in turn, shouldn't be incredibly big anyway (it's unfortunate that neither limit is rigorously documented, but I personally wouldn't unduly worry about either).

In Eve, what is the difference between inserting a document into a collection using the http method POST and using the mongo shell?

Background Information
The answer to my previous question (In Eve, how can you make a sub-resource of a collection and keep the parent collections endpoint?) was to use the multiple endpoints, one datasource feature of Eve. In the IRC channel, I was speaking with cuibonobo, and she was able to get this working by changing the game_id to be an objectid instead of a string, as shown here:
http://gist.github.com/uunsamp/d969116367181bb30731
I however didn't get this working, and as you can see from the conversation, I was putting documents into the collection differently:
14:59 < cuibonobo> no. it's just that since your previous settings file saved the game id as a string, the lookup won't work
15:00 < cuibonobo> it will only work on documents where game_id has been saved as an ObjectId
15:01 < cuibonobo> the way Eve currently works, if you set the type to 'objectid', it will convert the string to a Mongo ObjectId before saving it in the database. but that conversion doesn't happen with strings
15:02 < znn> i haven't been using eve for storing objects
15:02 < znn> i've been using the mongo shell interface for inserting items
15:03 < cuibonobo> oh. hmm. that might complicate things. Eve does type conversions and other stuff before inserting documents.
15:04 < cuibonobo> so inserting stuff directly into mongo generally isn't recommended
Question
Which leads me to stackoverflow :)
What is the difference between inserting a document into a collection using the http method POST and using the mongo shell? Will users eventually be able to use either method of document insertion?
Extra information
I was looking through http://github.com/nicolaiarocci/eve/blob/develop/eve/methods/post.py before asking this question, but this could take a while to understand, much longer than just asking someone who is more familiar with the code than I am.
The quick answer is that Eve adds a few meta fields (etag, updated, created) along with every stored document. If you want to store documents locally (not going through HTTP) you can use post_internal:
Intended for internal post calls, this method is not rate limited,
authentication is not checked and pre-request events are not raised.
Adds one or more documents to a resource. Each document is validated
against the domain schema. If validation passes the document is inserted
and ID_FIELD, LAST_UPDATED and DATE_CREATED along with a link to the
document are returned. If validation fails, a list of validation issues
is returned.
Usage example:
from run import app
from eve.methods.post import post_internal
payload = {
    "firstname": "Ray",
    "lastname": "LaMontagne",
    "role": ["contributor"]
}
with app.test_request_context():
    x = post_internal('people', payload)
    print(x)
Documents inserted with post_internal are subject to the same validation and will be stored as if they had been submitted by API clients via HTTP. In 0.5-dev (not released yet), internal PATCH, PUT and DELETE methods have been added too.

Django : How to count number of people viewed

I'm making a simple BBS application in Django and I want it so that whenever someone sees a post, the number of views on that post (post_view_no) is increased.
At the moment, I face two difficulties:
I need to limit the increase in post_view_no so that one user can only increase it once regardless of how many times the user refreshes/clicks on the post.
I also need to be able to track the users that are not logged in.
Regarding the first issue, it seems pretty easy as long as I create a model called 'View' and check the db, but I have a feeling this may be overkill.
In terms of the second issue, all I can think of is using cookies / IP addresses to track the users, but an IP is hardly unique and I cannot figure out how to use cookies.
I believe this is a common feature on forum/bbs solutions but google search only turned up with plugins or 'dumb' solutions that increase the view each time the post is viewed.
What would be the best way to go about this?
I think you can do both things via cookies. For example, when user visits a page, you can
Check if they have “viewed_post_%s” (where %s is post ID) key set in their session.
If they have, do nothing. If they don't, increase view_count numeric field of your corresponding Post object by one, and set the key (cookie) “viewed_post_%s” in their session (so that it won't count in future).
This would work with both anonymous and registered users, however by clearing cookies or setting up browser to reject them user can game the view count.
Now using cookies (sessions) with Django is quite easy: to set a value for current user, you just invoke something like
request.session['viewed_post_%s' % post.id] = True
in your view, and done. (Check the docs, and especially examples.)
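The two steps above can be sketched with a plain dict standing in for request.session (the count_view and view_counts names are made up for the demo; in Django the counter would live on the Post model):

```python
def count_view(session, post_id, view_counts):
    """Increase the post's view count only on the first visit in
    this session; mark the session so repeat visits don't count."""
    key = 'viewed_post_%s' % post_id
    if not session.get(key):
        view_counts[post_id] = view_counts.get(post_id, 0) + 1
        session[key] = True  # the cookie-backed flag

# Demo: one visitor refreshing twice, then a second visitor.
counts = {}
alice, bob = {}, {}
count_view(alice, 42, counts)
count_view(alice, 42, counts)  # refresh: flag already set, no change
count_view(bob, 42, counts)
```

In a real view, session is request.session and the increment becomes an update on the Post row (ideally with an F() expression to avoid race conditions).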
Disclaimer: this is off the top of my head, I haven't done this personally, usually when there's a need to do some page view / activity tracking (so that you see what drives more traffic to your website, when users are more active, etc.) then there's a point in using a specialized system (e.g., Google Analytics, StatsD). But for some specific use case, or as an exercise, this should work.
Just to offer a secondary solution, which I think would work but is also prone to gaming (e.g. if the user comes via a proxy or from different devices). I haven't tried this either, but I think it should work and wouldn't require thinking about cookies; plus you aggregate some extra data, which is nice.
I would make a model called TrackedPost.
class TrackedPost(models.Model):
    post = models.ForeignKey(Post)
    ip = models.CharField(max_length=16)  # only accounting for ipv4
    user = models.ForeignKey(User, null=True, blank=True)  # null when the viewer is anonymous
Then when you view a post, you would take the request's IP.
def my_post_view(request, post_id):
    # Note: a sketch, not tested. Anonymous users get user=None.
    ip = request.META.get('REMOTE_ADDR')
    user = request.user if request.user.is_authenticated() else None
    tracked_post, created = TrackedPost.objects.get_or_create(post_id=post_id, ip=ip, user=user)
    if created:
        tracked_post.post.count += 1
        tracked_post.post.save()
    return render_to_response('')

Alternative to singleton?

I'm a Python & App Engine (and server-side!) newbie, and I'm trying to create a very simple CMS. Each deployment of the application would have one (and only one) Company object, instantiated from something like:
class Company(db.Model):
    name = db.StringProperty()
    profile = db.TextProperty()
    addr = db.TextProperty()
I'm trying to provide the facility to update the company profile and other details.
My first thought was to have a Company entity singleton. But having looked at (although far from totally grasped) this thread, I get the impression that it's difficult, and inadvisable, to do this.
So then I thought that perhaps, for each deployment of the CMS, I could, as a one-off, run a script (triggered by a totally obscure URL) which simply instantiates Company. From then on, I would get this instance with: theCompany = Company.all()[0]
Is this advisable?
Then I remembered that someone in that thread suggested simply using a module. So I just created a Company.py file and stuck a few variables in it. I've tried this in the SDK and it seems to work; to my surprise, modified variable values "survived" between requests.
Forgive my ignorance, but I assume these values are only held in memory rather than on disk, unlike Datastore stuff? Is this a robust solution? (And would the module variables be in scope for all invocations of my application's scripts?)
Global variables are "app-cached." This means that each particular instance of your app will remember these variables' values between requests. However, when an instance is shutdown these values will be lost. Thus I do not think you really want to store these values in module-level variables (unless they are constants which do not need to be updated).
I think your original solution will work fine. You could even create the original entity using the remote API tool so that you don't need an obscure page to instantiate the one and only Company object.
You can also make the retrieval of the singleton Company entity a bit faster if you retrieve it by key.
If you will need to retrieve this entity frequently, then you can avoid round-trips to the datastore by using a caching technique. The fastest would be to app-cache the Company entity after you've retrieved it from the datastore. To protect against the entity becoming too out of date, you can also app-cache the time you last retrieved it, and if that time is more than N seconds old, re-fetch it from the datastore. For more details on this option and how it compares to alternatives, check out Nick Johnson's article Storage options on App Engine.
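That app-cache-with-staleness-check pattern can be sketched as follows, with a stand-in fetch function instead of a real datastore call and an injectable clock so the demo is deterministic (all names here are invented for the illustration):

```python
import time

_cache = {"entity": None, "fetched_at": 0.0}

def get_company(fetch, max_age=60.0, now=time.time):
    """Return the app-cached entity, re-fetching via fetch()
    when the cached copy is older than max_age seconds."""
    if _cache["entity"] is None or now() - _cache["fetched_at"] > max_age:
        _cache["entity"] = fetch()
        _cache["fetched_at"] = now()
    return _cache["entity"]

# Demo with a fake clock; `calls` counts datastore round-trips.
calls = []
fake_now = [0.0]
fetch = lambda: calls.append(1) or {"name": "Acme"}
first = get_company(fetch, max_age=60.0, now=lambda: fake_now[0])
fake_now[0] = 30.0   # still fresh: served from the cache
second = get_company(fetch, max_age=60.0, now=lambda: fake_now[0])
fake_now[0] = 100.0  # stale: triggers a re-fetch
third = get_company(fetch, max_age=60.0, now=lambda: fake_now[0])
```

On App Engine, fetch would be something like a get-by-key of the singleton entity, and each instance keeps its own copy of the module-level cache.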
It sounds like you are trying to provide a way for your app to be configurable on a per-application basis.
Why not use the datastore to store your company entity with a key_name? Then you will always know how to fetch the company entity, and you'll be able edit the company without redeploying.
company = Company(key_name='c')
# set stuff on company....
company.put()
# later in code...
company = Company.get_by_key_name('c')
Use memcache to store the details of the company and avoid repeated datastore calls.
In addition to memcache, you can use module variables to cache the values. They are cached, as you have seen, between requests.
I think the approach you read about is the simplest:
Use module variables, initialized to None.
Provide accessors (get/setters) for these variables.
When a variable is accessed, if its value is None, fetch it from the database. Otherwise, just use it.
This way, you'll have app-wide variables provided by the module (which won't be instantiated again and again), they will be shared and you won't lose them.
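A minimal sketch of those three steps, with a stand-in fetch_from_datastore function (the names are made up; in the real app the fetch would be a Company.get_by_key_name call or similar):

```python
# Module-level cache, shared by all requests served by this instance.
_company = None

def fetch_from_datastore():
    # Stand-in for the real datastore read.
    return {"name": "Acme", "profile": "..."}

def get_company():
    """Lazily fetch the entity on first access, then reuse it."""
    global _company
    if _company is None:
        _company = fetch_from_datastore()
    return _company

def set_company(value):
    """Update the cache (and, in the real app, write through to
    the datastore so other instances eventually see the change)."""
    global _company
    _company = value
```

The first get_company() call hits the datastore; every later call on the same instance is served from memory until set_company replaces it.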
