view_func.__module__ suddenly broken - python

I have a process_view middleware that sets the module and view names into the request like this:
class ViewName(object):
    def process_view(self, request, view_func, view_args, view_kwargs):
        request.module_name = view_func.__module__
        request.view_name = view_func.__name__
I use the names together as a key for session-based pagination.
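Roughly, the key is built like this (an illustrative sketch only, not my real pagination code; pagination_key is just a made-up helper name):

def pagination_key(request):
    # Combine the names set by the middleware into one per-view key.
    return "pagination:%s.%s" % (request.module_name, request.view_name)

# e.g. request.session[pagination_key(request)] = current_page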
But as of yesterday, for reasons I am unable to discover, view_func.__module__ now returns 'cp.models', which is a model file in one of my apps.
I've gone back one commit at a time trying to find the cause; the issue is still there even after reverting the code to a state from more than a month ago.
Only two Python packages have changed recently on the server. My app uses neither, and the updates were applied more than a month ago:
cat /var/log/dpkg.log* | grep "upgrade" | grep python
2013-01-25 03:41:05 upgrade python-problem-report 2.0.1-0ubuntu15.1 2.0.1-0ubuntu17.1
2013-01-25 03:41:06 upgrade python-apport 2.0.1-0ubuntu15.1 2.0.1-0ubuntu17.1
I also tried rearranging my middleware list, but it didn't help.
Any idea what else might be causing this issue?
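For what it's worth, one way I can try to narrow it down is to log exactly what process_view receives (a debugging sketch, not part of my actual middleware):

import logging
logger = logging.getLogger(__name__)

class ViewName(object):
    def process_view(self, request, view_func, view_args, view_kwargs):
        # Temporary: log the callable itself to see where __module__ points.
        logger.warning("view_func=%r module=%s name=%s",
                       view_func, view_func.__module__, view_func.__name__)
        request.module_name = view_func.__module__
        request.view_name = view_func.__name__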

Related

Python AppEngine MapReduce

I have created a pretty simple MapReduce pipeline, but I am getting a cryptic error:
PipelineSetupError: Error starting production.cron.pipelines.ItemsInfoPipeline(*(), **{})#a741186284ed4fb8a4cd06e38921beff:
when I try to start it. This is the pipeline code:
class ItemsInfoPipeline(base_handler.PipelineBase):
    """
    """
    def run(self):
        output = yield mapreduce_pipeline.MapreducePipeline(
            job_name="items_job",
            mapper_spec="production.cron.mappers.items_info_mapper",
            input_reader_spec="mapreduce.input_readers.DatastoreInputReader",
            mapper_params={
                "input_reader": {
                    "entity_kind": "production.models.Transaction"
                }
            }
        )
        yield ItemsInfoStorePipeline(output)

class ItemsInfoStorePipeline(base_handler.PipelineBase):
    """
    """
    def run(self, statistics):
        print statistics
        return "OK"
Of course I have double-checked that the mapper path is right. Note that ItemsInfoStorePipeline doesn't do anything yet, because I am focused on getting the pipeline started, which is not happening.
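For reference, a minimal mapper with the signature DatastoreInputReader expects looks roughly like this (an illustrative sketch, not my actual items_info_mapper):

# production/cron/mappers.py -- illustrative sketch only; the real
# items_info_mapper does more than this.
from mapreduce import operation as op

def items_info_mapper(transaction):
    # DatastoreInputReader passes one production.models.Transaction entity
    # per call; a mapper may yield output values and/or counter operations.
    yield op.counters.Increment("transactions_seen")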
It is all triggered by the following Flask view:
class ItemsInfoMRJob(views.MethodView):
    """
    It's based on transactions.
    """
    def get(self):
        """
        :return:
        """
        pipeline = ItemsInfoPipeline()
        pipeline.start()
        redirect_url = "%s/status?root=%s" % (pipeline.base_path, pipeline.pipeline_id)
        return flask.redirect(redirect_url)
I am using GoogleAppEngineMapReduce==1.9.22.0
Thanks for any help.
UPDATE
The above code works once deployed.
UPDATE 2
Apparently more people are dealing with this:
https://github.com/GoogleCloudPlatform/appengine-mapreduce/issues/103
I'm updating this. I have a code base that uses pipelines and works fine on OS X. Another developer, also on OS X, can't get it working no matter what I do; he gets this:
Encountered unexpected error from ProtoRPC method implementation: PipelineSetupError
I've tried swapping versions around and making our machines match exactly, and it continues to happen. I finally broke down and built an Ubuntu image in Docker, doing my best to match our versions of App Engine and the libraries exactly.
It also refuses to start, with the same message. I started working through the libraries, uncommenting the code that swallows the error, but that turned into a long rabbit hole because a lot of the code above it also seems to swallow whatever is going on.

win32com.client.Dispatch works, win32com.client.GetActiveObject doesn't

I'm using Python 2.7.9 on Windows 7.
The overall goal: have another application access our custom COM server (already running at this point) and send it a message to be displayed. Obviously there needs to be a single server and multiple clients.
I'm trying to use some custom code as a COM server. The class was created as:
class StatusServerClass:
    _public_methods_ = ...
    _reg_progid_ = "CseStatusServerLib.CseStatusServer"
    _reg_verprogid_ = "CseStatusServerLib.CseStatusServer"
    _reg_clsid_ = "{<GUID here>}"
    _reg_desc_ = 'CSE Status Server'
    _typelib_guid_ = "{<typelib GUID here>}"
    _typelib_version_ = 1, 0  # Version 1.0
    _reg_clsctx_ = pythoncom.CLSCTX_LOCAL_SERVER
    _reg_threading_ = "Apartment"  # Not used?

    def __init__.....
and registered using:
win32com.server.register.UseCommandLine(StatusServerClass)
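That call lives in the usual pywin32 entry point, along these lines (a sketch; my actual __main__ block may differ slightly):

if __name__ == '__main__':
    import win32com.server.register
    # Running the module registers the class (or unregisters it with --unregister).
    win32com.server.register.UseCommandLine(StatusServerClass)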
I can see it in regedit and, as far as I can tell, it looks OK.
The GUID is right and the name is right.
Now when I go to use it, this works just fine:
self.StatusClient = Dispatch('CseStatusServerLib.CseStatusServer')
but when I want to attach to a running instance from another exe (or even another Python window, for debugging) using:
win32com.client.GetActiveObject("CseStatusServerLib.CseStatusServer")
it just gives me:
dispatch = pythoncom.GetActiveObject(resultCLSID)
com_error: (-2147221021, 'Operation unavailable', None, None)
Does that mean it's not registered?
I've tried using the GUID, and I've tried using pythoncom.GetObject with both the ID and the GUID... no luck.
I've tried the comtypes package and get the same thing.
Any ideas on what I'm doing wrong? Why does Dispatch find it by name while GetActiveObject complains?
Doesn't the fact that Dispatch works by name suggest that the registration worked?
What else can I verify in regedit?
thanks!!!
UPDATED 6/6/2016
In case you haven't realized yet, I know very little about this. But I have read that for win32com.client.GetActiveObject() to work, the server needs to be in the "running object table"... and it's not.
So I found some more example code that I used to register the class this way:
import win32com.server.util

wrapped = win32com.server.util.wrap(StatusServerClass)
flags = pythoncom.REGCLS_MULTIPLEUSE | pythoncom.REGCLS_SUSPENDED
handle = pythoncom.RegisterActiveObject(wrapped, StatusServerClass._reg_clsid_, flags)
and that does make the server show up in the running object table, and I can get this:
testSW = win32com.client.GetActiveObject("CseStatusServerLib.CseStatusServer")
to return without error.
So now I can use Dispatch or GetActiveObject just fine in Python and PythonWin, and I can even interact with the server from Excel/VB <-> Python, and it appears to share the namespace.
BUT...
I still can't get this IE-based third-party app to use the existing server... even with GetActiveObject. It always wants to create a new instance and use its own namespace... not good.
Is there something about IE or Chrome that would prevent the existing server from being used? Again, it works fine in Excel/VB. The application is supposed to execute "python myscript.py" (which works fine in IDLE, PythonWin, and from the command line), but the COM server part doesn't run when called from the IE/Chrome app (although other Python functions like file writing work just fine).
Also, seeing as I know very little about this, by all means suggest a better way of doing this: starting a server in Python as a singleton and then accessing it from another application.
thanks!!

How to Limit Ratings (or anything else in Django) by IP?

I am using DjangoRatings for a web app that allows anonymous ratings from registered as well as unregistered users. After I set the votes-per-IP integer in the DjangoRatings settings, everything works fine; however, when I exceed the number of votes allowed per IP, the whole page is replaced by an IPLimitReached error and I have to get back to the previous page with the back button. My question is: what can I add to my views.py to tell Django that, when DjangoRatings raises this error, it should simply show the user a message like "You can only vote once!" and leave the loaded web page as it is, instead of taking the page down?
If there's an easier way to do this general IP checking besides DjangoRatings, I am open to implementing other approaches, but DjangoRatings seems much easier than anything else since the only thing I need IP limits on is ratings. To be clear, here is the exact error that DjangoRatings gives me:
IPLimitReached at /myapp/rating /page1
And this is straight from the DjangoRatings source code:
num_votes = Vote.objects.filter(
    content_type=kwargs['content_type'],
    object_id=kwargs['object_id'],
    key=kwargs['key'],
    ip_address=ip_address,
).count()
if num_votes >= getattr(settings, 'RATINGS_VOTES_PER_IP', RATINGS_VOTES_PER_IP):
    raise IPLimitReached()
...
kwargs.update(defaults)
if use_cookies:
    # record with specified cookie was not found ...
    cookie = defaults['cookie']  # ... thus we need to replace old cookie (if present) with new one
    kwargs.pop('cookie__isnull', '')  # ... and remove 'cookie__isnull' (if present) from .create()'s **kwargs
rating, created = Vote.objects.create(**kwargs), True
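What I'm imagining is something along these lines in views.py (a sketch, assuming the vote is recorded through a djangoratings RatingField and that djangoratings.exceptions.IPLimitReached is the exception being raised; MyModel and rate_object are placeholder names):

from django.http import HttpResponse
from djangoratings.exceptions import IPLimitReached

def rate_object(request, object_id):
    obj = MyModel.objects.get(pk=object_id)  # placeholder model
    try:
        # The RatingField add() call is what ends up raising IPLimitReached.
        obj.rating.add(score=int(request.POST['score']),
                       user=request.user,
                       ip_address=request.META['REMOTE_ADDR'])
    except IPLimitReached:
        return HttpResponse("You can only vote once!")
    return HttpResponse("Thanks for voting!")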

Getting "If-Match or If-None-Match header or entry etag attribute required" errors when batch deleting contacts

I'm using the gdata Python library to do batched deletes of contacts, and I just get the "If-Match or If-None-Match header or entry etag attribute required" error.
I think the problem started when I had to enable the Contacts API in the console (which until a few days ago wasn't required? *).
EDIT:
It's actually failing for both updating and deleting operations. Batched insert works fine.
I tried specifying the If-Match header, but it's still failing:
custom_headers = atom.client.CustomHeaders(**{'If-Match': '*'})

request_feed = gdata.contacts.data.ContactsFeed()
request_feed.AddDelete(entry=contact, batch_id_string='delete')

response_feed = self.gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch',
    custom_headers=custom_headers
)
I also created a ticket on the project page, but I doubt it will get any attention there.
EDIT 2:
Using the Batch method with force=True (which just adds the If-Match: * header) gives the same result.
response_feed = self.gd_client.Batch(
    request_feed,
    uri='https://www.google.com/m8/feeds/contacts/default/full/batch',
    force=True
)
* Can someone verify this? I never had to enable it in the console before and my app was able to use the Contacts API without problem, and I believe it wasn't even available before. I was surprised to see it yesterday.
Copying answer from the Google code ticket.
Basically, you need to patch the client's Post method to modify the request feed slightly. Here's one way to do it without directly modifying the library source:
def patched_post(client, entry, uri, auth_token=None, converter=None,
                 desired_class=None, **kwargs):
    if converter is None and desired_class is None:
        desired_class = entry.__class__
    http_request = atom.http_core.HttpRequest()
    entry_string = entry.to_string(gdata.client.get_xml_version(client.api_version))
    entry_string = entry_string.replace('ns1', 'gd')  # where the magic happens
    http_request.add_body_part(entry_string, 'application/atom+xml')
    return client.request(method='POST', uri=uri, auth_token=auth_token,
                          http_request=http_request, converter=converter,
                          desired_class=desired_class, **kwargs)

# When it comes time to do a batched delete/update, instead of calling
# client.ExecuteBatch, call patched_post directly:
patched_post(client_instance, entry_feed,
             'https://www.google.com/m8/feeds/contacts/default/full/batch')
The ticket referenced in the original post has some updated information and a temporary workaround that allows batch deletes to succeed. So far it's working for me!
http://code.google.com/p/gdata-python-client/issues/detail?id=700
You can also specify the etag attribute to get around it. This works in the batch request payload:
<entry gd:etag="*">
  <batch:id>delete</batch:id>
  <batch:operation type="delete"/>
  <id>urlAsId</id>
</entry>
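In the Python client, the equivalent appears to be setting the etag on each entry before it goes into the batch feed (a sketch only; I haven't confirmed this is the intended usage of the library):

# Force the wildcard etag on each contact before batching the delete.
request_feed = gdata.contacts.data.ContactsFeed()
for contact in contacts_to_delete:  # contacts_to_delete: entries fetched earlier
    contact.etag = '*'
    request_feed.AddDelete(entry=contact, batch_id_string='delete')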

Using mx:RemoteObject with web2py's @service.amfrpc decorator

I am using web2py (v1.63) and Flex 3. web2py v1.61 introduced the @service decorators, which allow you to tag a controller function with @service.amfrpc. You can then call that function remotely using http://..../app/default/call/amfrpc/[function]. See http://www.web2py.com/examples/default/tools#services. Does anybody have an example of how you would set up Flex 3 to call a function like this? Here is what I have tried so far:
<mx:RemoteObject id="myRemote" destination="amfrpc" source="amfrpc"
                 endpoint="http://{mysite}/{myapp}/default/call/amfrpc/">
    <mx:method name="getContacts"
               result="show_results(event)"
               fault="on_fault(event)"/>
</mx:RemoteObject>
In my scenario, what should be the value of the destination and source attributes? I have read a couple of articles on non-web2py implementations, such as http://corlan.org/2008/10/10/flex-and-php-remoting-with-amfphp/, but they use a .../gateway.php file instead of having a URI that maps directly to the function.
Alternatively, I have been able to use flash.net.NetConnection to successfully call my remote function, but most of the documentation I have found considers this to be the old, pre-Flex 3 way of doing AMF. See http://pyamf.org/wiki/HelloWorld/Flex. Here is the NetConnection code:
gateway = new NetConnection();
gateway.connect("http://{mysite}/{myapp}/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
I have not found a way to use a RemoteObject with the @service.amfrpc decorator. However, I can use the older ActionScript approach with a NetConnection (similar to what I posted originally) and pair it with a @service.amfrpc function on the web2py side. This seems to work fine. The one thing you would want to change in the NetConnection code I shared originally is to add an event listener for connection status. You can add more listeners if you feel the need, but I found that NetStatusEvent was a must; it will fire if the server is not responding. Your connection setup would look like:
gateway = new NetConnection();
gateway.addEventListener(NetStatusEvent.NET_STATUS, gateway_status);
gateway.connect("http://127.0.0.1:8000/robs_amf/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
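For reference, the web2py side of that pairing is just a service-decorated controller function, roughly like this (a sketch modeled on the web2py services example linked above; the body of getContacts is a placeholder):

# controllers/default.py -- sketch only; getContacts' body is a placeholder.
from gluon.tools import Service
service = Service(globals())

@service.amfrpc
def getContacts():
    return [{'name': 'Alice', 'phone': '555-0100'},
            {'name': 'Bob', 'phone': '555-0101'}]

def call():
    session.forget()
    return service()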
-Rob
