How to Limit Ratings (or anything else in Django) by IP? - python

I am using DjangoRatings for a web app which allows anonymous ratings from registered as well as unregistered users. After I set the per-IP vote limit (RATINGS_VOTES_PER_IP) in settings.py, everything works fine; however, when I exceed the number of votes I have allowed per IP, the whole page is replaced by an IPLimitReached error and I have to go back to the previous page with the back button. My question is: what can I add to my views.py file so that, when DjangoRatings raises the IPLimitReached exception, Django simply shows the user a message like "You can only vote once!" and leaves the loaded page as it is, instead of taking the whole page down?
If there’s an easier way to do this kind of per-IP limiting besides DjangoRatings, I am open to implementing something else, but DjangoRatings seems much easier than anything else since the only thing I need IP limits on is ratings. To be more clear, here is the exact error that DjangoRatings gives me:
IPLimitReached at /myapp/rating /page1
And this is straight from the DjangoRatings source code:
num_votes = Vote.objects.filter(
    content_type=kwargs['content_type'],
    object_id=kwargs['object_id'],
    key=kwargs['key'],
    ip_address=ip_address,
).count()
if num_votes >= getattr(settings, 'RATINGS_VOTES_PER_IP', RATINGS_VOTES_PER_IP):
    raise IPLimitReached()
...
kwargs.update(defaults)
if use_cookies:
    # record with specified cookie was not found ...
    cookie = defaults['cookie']  # ... thus we need to replace old cookie (if presented) with new one
    kwargs.pop('cookie__isnull', '')  # ... and remove 'cookie__isnull' (if presented) from .create()'s **kwargs
rating, created = Vote.objects.create(**kwargs), True
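One way to handle this in the view, sketched under the assumption that the exception can be imported from the package's exceptions module (the exact import path may differ between DjangoRatings versions), is to catch IPLimitReached around whatever call records the vote (submit_vote() below is a hypothetical placeholder for that call) and return a friendly message instead of letting the exception bubble up into an error page:

from django.http import HttpResponse
# Assumption: django-ratings exposes the exception here; adjust to your installed version.
from djangoratings.exceptions import IPLimitReached

def rate(request, object_id):
    try:
        # submit_vote() is a hypothetical stand-in for whatever call currently
        # records the rating (the DjangoRatings view or a manual field.add()).
        submit_vote(request, object_id)
    except IPLimitReached:
        # The per-IP vote limit was hit: keep the page up and just tell the user.
        return HttpResponse("You can only vote once!")
    return HttpResponse("Thanks for voting!")

If the rating widget posts via AJAX, the same handler can return the message as JSON so the page is never reloaded.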

Related

How to disable automatically generated fields in Python Eve?

How to disable fields _updated, _created, _etag, _links?
I want to limit bandwidth, and those fields are larger than the data I actually need to get from my database (MongoDB).
With the exception of _links, which you can remove by disabling HATEOAS (HATEOAS = False), you can only rename the other meta fields.
While the framework itself won't remove them, you could hook up a custom callback and purge those fields yourself before the response is sent over the wire.
from eve import Eve

def on_fetched_resource(resource, response):
    for document in response['_items']:
        del(document['_etag'])
        # etc.

app = Eve()
app.on_fetched_resource += on_fetched_resource

if __name__ == '__main__':
    app.run()
Good question!
If you are not too concerned about concurrency control, which exists for client-side data integrity, you can disable it, and with it the _etag field. This can be done by adding a simple option to your settings.py:
IF_MATCH = False
This can also be handy if you want to edit the database with external tools or apps, as it lets you avoid additional jiggery-pokery with _etag.
To understand whether you need ETag, check out: https://docs.python-eve.org/en/stable/features.html#data-integrity-and-concurrency-control
Also see Nicola Iarocci's answer, which mentions disabling HATEOAS (HATEOAS = False) and a way to remove the _etag field without disabling ETag checks. (Actually, I wonder how you can check and post with the latest _etag then?)
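Putting the two suggestions together, a minimal settings.py sketch (how much of this you want depends on how much integrity checking you are willing to give up):

# Eve settings.py -- sketch combining the suggestions above
HATEOAS = False   # drops the _links meta field from responses
IF_MATCH = False  # disables ETag-based concurrency control (no If-Match header required)

The remaining meta fields (_created, _updated) can still be renamed via the DATE_CREATED and LAST_UPDATED settings, or purged in an event hook as shown in the first answer.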

Using libsecret I can't get to the label of an unlocked item

I'm working on a little program that uses libsecret. This program should be able to create a Secret.Service...
from gi.repository import Secret
service = Secret.Service.get_sync(Secret.ServiceFlags.LOAD_COLLECTIONS)
... get a specific Collection from that Service...
# 2 is the index of a custom collection I created, not the default one.
collection = service.get_collections()[2]
... and then list all the Items inside that Collection, simply by printing their labels.
# I'm just printing a single label here for testing, I'd need all of course.
print(collection.get_items()[0].get_label())
An important detail is that the Collection may initially be locked, so I need to include code that checks for that possibility and tries to unlock the Collection.
# The unlock method returns a tuple (number_of_objs_unlocked, [list_of_objs])
collection = service.unlock_sync([collection])[1][0]
This is important because the code I currently have can do all I need when the Collection is initially unlocked. However, if the Collection is initially locked, even after I unlock it I can't get the labels from the Items inside. What I can do is disconnect() the Service, recreate the Service, and get the now-unlocked Collection; that way I am able to read the label on each Item. Another interesting detail is that, after the labels have been read once, I no longer need the Service reconnection to access them. This seems quite inelegant, so I started looking for a different solution.
I realized that the Collection inherits from Gio.DBusProxy, and that class caches the data of the object it accesses. So I'm assuming the problem is that I'm not updating the cache. This is strange, though, because the documentation states that Gio.DBusProxy should be able to detect changes on the original object, but that's not happening.
Now I don't know how to update the cache on that class. I've taken a look at some seahorse (another application that uses libsecret) Vala code, which I wasn't able to completely decipher since I can't write Vala, but it mentioned the Object.emit() method; I'm still not sure how I could use that method to achieve my goal. From the documentation (https://lazka.github.io/pgi-docs/Secret-1/#) I found another promising method, Object.notify(), which seems to be able to send notifications of changes that would enable cache updates, but I haven't been able to use it properly yet.
I also posted on the gnome-keyring mailing list about this...
https://mail.gnome.org/archives/gnome-keyring-list/2015-November/msg00000.html
... with no answer so far, and found a bugzilla report on gnome.org that mentions this issue...
https://bugzilla.gnome.org/show_bug.cgi?id=747359
... with no solution so far(7 months) either.
So if someone could shed some light on this problem, that would be great. Otherwise some inelegant code will unfortunately find its way into my little program.
Edit-0:
Here is some code to replicate the issue in Python3.
This snippet creates a collection 'test_col', with one item 'test_item', and locks the collection. Note libsecret will prompt you for the password you want for this new collection:
#!/usr/bin/env python3
from gi import require_version
require_version('Secret', '1')
from gi.repository import Secret
# Create schema
args = ['com.idlecore.test.schema']
args += [Secret.SchemaFlags.NONE]
args += [{'service': Secret.SchemaAttributeType.STRING,
          'username': Secret.SchemaAttributeType.STRING}]
schema = Secret.Schema.new(*args)
# Create 'test_col' collection
flags = Secret.CollectionCreateFlags.COLLECTION_CREATE_NONE
collection = Secret.Collection.create_sync(None, 'test_col', None, flags, None)
# Create item 'test_item' inside collection 'test_col'
attributes = {'service': 'stackoverflow', 'username': 'xor'}
password = 'password123'
value = Secret.Value(password, len(password), 'text/plain')
flags = Secret.ItemCreateFlags.NONE
Secret.Item.create_sync(collection, schema, attributes,
                        'test_item', value, flags, None)
# Lock collection
service = collection.get_service()
service.lock_sync([collection])
Then we need to restart the gnome-keyring-daemon; you can just log out and back in, or use the command line:
gnome-keyring-daemon --replace
This will set up your keyring so we can try opening a collection that is initially locked. We can do that with the code snippet below. Note that you will be prompted once again for the password you set previously:
#!/usr/bin/env python3
from gi import require_version
require_version('Secret', '1')
from gi.repository import Secret
# Get the service
service = Secret.Service.get_sync(Secret.ServiceFlags.LOAD_COLLECTIONS)
# Find the correct collection
for c in service.get_collections():
    if c.get_label() == 'test_col':
        collection = c
        break
# Unlock the collection and show the item label, note that it's empty.
collection = service.unlock_sync([collection])[1][0]
print('Item Label:', collection.get_items()[0].get_label())
# Do the same thing again, and it works.
# It's necessary to disconnect the service to clear the cache,
# Otherwise we keep getting the same empty label.
service.disconnect()
# Get the service
service = Secret.Service.get_sync(Secret.ServiceFlags.LOAD_COLLECTIONS)
# Find the correct collection
for c in service.get_collections():
    if c.get_label() == 'test_col':
        collection = c
        break
# No need to unlock again, just show the item label
print('Item Label:', collection.get_items()[0].get_label())
This code attempts to read the item label twice: once the normal way, which fails (you should see an empty string), and then using a workaround that disconnects the service and reconnects again.
I came across this question while trying to update a script I use to retrieve passwords from my desktop on my laptop, and vice versa.
The clue was in the documentation for Secret.ServiceFlags—there are two:
OPEN_SESSION = 2
establish a session for transfer of secrets while initializing the Secret.Service
LOAD_COLLECTIONS = 4
load collections while initializing the Secret.Service
I think for a Service that both loads collections and allows transfer of secrets (including item labels) from those collections, we need to use both flags.
The following code (similar to your mailing list post, but without a temporary collection set up for debugging) seems to work. It gives me the label of an item:
from gi.repository import Secret
service = Secret.Service.get_sync(Secret.ServiceFlags.OPEN_SESSION |
                                  Secret.ServiceFlags.LOAD_COLLECTIONS)
collections = service.get_collections()
unlocked_collection = service.unlock_sync([collections[0]], None)[1][0]
unlocked_collection.get_items()[0].get_label()
I have been doing this:
print(collection.get_locked())
if collection.get_locked():
    service.unlock_sync([collection])
I don't know if it is going to work, though, because I have never hit a case where I have something that is locked. If you have a piece of sample code where I can create a locked instance of a collection, then maybe I can help.

Getting "If-Match or If-None-Match header or entry etag attribute required" errors when batch deleting contacts

I'm using the gdata Python library to do batched deletes of contacts, and I just get the "If-Match or If-None-Match header or entry etag attribute required" error.
I think the problem started when I had to enable the Contacts API in the console (which until a few days ago wasn't required? *).
EDIT:
It's actually failing for both updating and deleting operations. Batched insert works fine.
Tried specifying the If-Match header, but it's still failing:
custom_headers = atom.client.CustomHeaders(**{'If-Match': '*'})
request_feed = gdata.contacts.data.ContactsFeed()
request_feed.AddDelete(entry=contact, batch_id_string='delete')
response_feed = self.gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch',
    custom_headers=custom_headers
)
Also created a ticket on the project page, but I doubt it will get any attention there.
EDIT 2:
Using the Batch method with force=True (which just adds the If-Match: * header) gives the same result.
response_feed = self.gd_client.Batch(
    request_feed,
    uri='https://www.google.com/m8/feeds/contacts/default/full/batch',
    force=True
)
* Can someone verify this? I never had to enable it in the console before and my app was able to use the Contacts API without problem, and I believe it wasn't even available before. I was surprised to see it yesterday.
Copying answer from the Google code ticket.
Basically, you need to patch the client's Post method to modify the request feed slightly. Here's one way to do it without directly modifying the library source:
def patched_post(client, entry, uri, auth_token=None, converter=None, desired_class=None, **kwargs):
    if converter is None and desired_class is None:
        desired_class = entry.__class__
    http_request = atom.http_core.HttpRequest()
    entry_string = entry.to_string(gdata.client.get_xml_version(client.api_version))
    entry_string = entry_string.replace('ns1', 'gd')  # where the magic happens
    http_request.add_body_part(
        entry_string,
        'application/atom+xml')
    return client.request(method='POST', uri=uri, auth_token=auth_token,
                          http_request=http_request, converter=converter,
                          desired_class=desired_class, **kwargs)

# when it comes time to do a batched delete/update,
# instead of calling client.ExecuteBatch, directly call patched_post
patched_post(client_instance, entry_feed, 'https://www.google.com/m8/feeds/contacts/default/full/batch')
The ticket referenced in the original post has some updated information and a temporary work around that allows batch deletes to succeed. So far it's working for me!
http://code.google.com/p/gdata-python-client/issues/detail?id=700
You can also specify the etag attribute to get around it. This works in the batch request payload:
<entry gd:etag="*">
    <batch:id>delete</batch:id>
    <batch:operation type="delete"/>
    <id>urlAsId</id>
</entry>
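A rough Python sketch of the same idea (assuming gd_client is an authenticated gdata.contacts.client.ContactsClient and contacts is a list of previously fetched ContactEntry objects; in the v3 data model the gd:etag XML attribute should be exposed as the entry's etag member) is to set the etag on each entry before adding it to the batch feed:

import gdata.contacts.data

request_feed = gdata.contacts.data.ContactsFeed()
for contact in contacts:
    contact.etag = '*'  # serialized as gd:etag="*" in the batch payload
    request_feed.AddDelete(entry=contact, batch_id_string='delete')

response_feed = gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch')

Depending on your client version you may still need the prefix patch above so that the attribute is serialized as gd:etag rather than ns1:etag.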

Python script to hide ploneformgen form after user has filled it out. (For Plone-4.3.2-64.)

After a user has filled out a (PloneFormGen) form, I would like to use a custom script adapter to call a Python script that changes the user's local role so that they can't see the form anymore. In other words, I want to prevent the user from filling out (or viewing) the form twice.
I figured that one way to do this is to call the script permission_changer.py, which is located in the form folder. The code I have in that script is this:
container.manage_delLocalRoles(('bob',))
container.reindexObjectSecurity()
Where ‘bob’ is just an example user, who has only the global role FormFiller (which I created under the Security tab of the ZMI) and the local role “Reader” for the form folder.
When I fill out the form (which is in the "private" state) as a system admin, the script is called successfully and bob loses his "Reader" local role (which is all he had to begin with), so he can't see the form anymore. However, when bob fills out the form, a "You do not have sufficient privileges to view this page." error is displayed and bob's local role is not removed. I can't work out why, and I've tried many different things:
I've changed the proxy for permission_changer.py by clicking on the "Proxy" tab for the script in the ZMI. I changed it to "Manager", "System Administrator", and "Owner", but that didn't solve the problem (nor did any combination of those).
I tried changing the proxy by creating a file permission_changer.py.metadata in the form folder containing this:
[default]
proxy = Manager
but that didn’t work either.
Strangely, when I change bob's global role to Manager, System Administrator, or even Viewer or Editor, the problem goes away and the script runs just fine (I can also change the script so that it adds and removes arbitrary other local roles). (These options are not solutions for me, because bob would still be able to see the form through his global role.)
Also, I tried giving the FormFiller role every possible permission under the Security tab, but that didn't work either.
So, I’m guessing that the problem has to do with the proxy settings, but I can’t work out what I’m doing wrong. I've searched around a lot, and I can't find anyone discussing a similar problem.
Any help would be much appreciated!
An ugly, ugly way to handle this may be to access the data saver field's saved input and parse its output for the data you want to check.
For example, if username is the second PFG field added to the form, a custom script adapter that prevents further submissions by a user could be:
alreadyInDB = False
savedData = ploneformgen.savefield.getSavedFormInputForEdit()
username = request.AUTHENTICATED_USER.getId()
usersInDB = [x.split(',')[1] for x in savedData.split('\r\n') if len(x) > 0]
if username in usersInDB:
    alreadyInDB = True
if alreadyInDB:
    return {'username': 'No way man!'}
I worked out what was going on, but I'm not sure how to describe it precisely. Basically, I found that by calling the script as a Custom Success Action (form > edit > overrides) I don't get the problem. So I think that by calling the script from a custom script adapter I was trying to change the user's permissions while they were still engaged with the form, and that is impossible, even with the Manager proxy role.
I hope that helps. And if anyone has a more precise description of the problem, that would be appreciated.
For granting and revoking the permissions to submit a form, you could:
Create a group (e.g. with the ID "Submitters") and assign the chosen users to it
Make sure the form-folder has the state 'private' and grant View-permissions via the sharing-tab of the form-folder to the group
Add a content-item of type 'Page' in the form-folder's parent (e.g. with the ID 'submitted') and set its state to 'public'
Add a content-item of type 'Custom Script Adapter', select 'Manager' in the field 'Proxy role', and insert the lines below into the field 'Script body':
# Remove current user of group and redirect to [FORM_PARENT_URL]/landing_page_id'.
# If user is not in group, fail silently and continue.
# Fail if landing_page_id does not exist in form-folder, or one of its parents.
#
# Assumes a page with the ID as declared in `landing_page_id` lives in the
# form-folder's parent (or one of its grand-parents, first found wins),
# and holds the state 'public', so users can view it regardless of their
# group-memberships.
#
# Meant to be used after submission of a PloneFormGen-form with private-state and
# a locally assigned Reader-role for the group, so only group-members can view and
# submit the form.
from Products.CMFCore.utils import getToolByName
group_id = 'Submitters' # change as needed
landing_page_id = 'submitted' # change as needed
portal_groups = getToolByName(ploneformgen, 'portal_groups')
user_id = ploneformgen.memberId()
parent_url = '/'.join(ploneformgen.absolute_url().split('/')[:-1])
redirect_to_url = parent_url + '/' + landing_page_id
# Revoke current user's group-membership:
portal_groups.removePrincipalFromGroup(user_id, group_id)
# Let user land in userland:
request.response.redirect(redirect_to_url)
Tested with Plone-4.3.11 and Products.PloneFormGen-1.7.25

Determining Exact Reason for Facebook Error Code 100

I am experimenting with Facebook and trying to create an event via the Graph API. I am using Django and the python-facebook-sdk from GitHub. I can successfully post to my wall, pull friends, etc.
I am using django-social-auth for the Facebook login and have this in settings.py for the permissions:
FACEBOOK_EXTENDED_PERMISSIONS = ['publish_stream','create_event','rsvp_event']
In the Graph API Explorer on Facebook my request works, so I know what parameters to use, and, well, I am using them.
Here is my python code:
def new_event(self):
    event = {}
    event['name'] = name
    event['privacy'] = 'OPEN'
    event['start_time'] = '2011-11-04T14:42Z'
    event['end_time'] = '2011-11-05T14:46Z'
    self.graph.put_object("me", "events", args=None, post_args=event)
The code in the SDK that calls the Facebook API is roughly the following (the access_token is also added to post_args, which is then converted to post_data and URL-encoded):
file = urllib.urlopen("https://graph.facebook.com/me/events?" +
                      urllib.urlencode(args), post_data)
The error I am getting is:
Exception Value: (#100) Invalid parameter
I am trying to figure out what is wrong, but I am also curious how to work out what is wrong in general, so I can debug this kind of thing in the future. It seems to be too generic an error, because it doesn't tell me what is actually wrong.
I'm not really sure how post_args works, but this call did the trick:
graph.put_object("me","events",start_time="2013-11-04T14:42Z", privacy="OPEN", end_time="2013-11-05T14:46Z", name="Test Event")
The invalid-parameter error most likely points to how you are feeding the parameters in as post_args. I don't think the SDK was ever designed to be fed like this, though I could be mistaken, as I'm not really sure what post_args would be doing.
Another way, based on how put_object is set up with **data, would be:
graph.put_object("me","events", **event)
