Determining Exact Reason for Facebook Error Code 100 - python

I am experimenting with Facebook and trying to create an event via the Graph API. I am using Django and the python-facebook-sdk from GitHub. I can successfully post to my wall, pull friends, etc.
I am using django-social-auth for the Facebook login and have the permissions set in settings.py:
FACEBOOK_EXTENDED_PERMISSIONS = ['publish_stream','create_event','rsvp_event']
In the Graph API Explorer on Facebook my request works, so I know what parameters to use and, well, I am using them.
Here is my Python code:
def new_event(self):
    event = {}
    event['name'] = name
    event['privacy'] = 'OPEN'
    event['start_time'] = '2011-11-04T14:42Z'
    event['end_time'] = '2011-11-05T14:46Z'
    self.graph.put_object("me", "events", args=None, post_args=event)
The code in the SDK that calls the Facebook API is roughly the following (the access_token is added to post_args, which is then converted to post_data and urlencoded):
file = urllib.urlopen("https://graph.facebook.com/me/events?" +
                      urllib.urlencode(args), post_data)
The error I am getting is:
Exception Value: (#100) Invalid parameter
I am trying to figure out what is wrong here, but I am also curious how to work out what is wrong in general so I can debug this in the future. The error seems too generic to tell me what the problem actually is.

Not really sure how post_args works, but this call did the trick:
graph.put_object("me", "events", start_time="2013-11-04T14:42Z", privacy="OPEN", end_time="2013-11-05T14:46Z", name="Test Event")
The invalid-parameter error most likely points to how you are feeding the parameters in as post_args. I don't think the SDK was ever designed to be fed that way, though I could be mistaken, since I'm not really sure what post_args does.
Another way, based on how put_object is set up with **data, would be:
graph.put_object("me","events", **event)
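For completeness, a minimal sketch of that second approach (my own illustration; the token and event values are placeholders), building the same dict as in the question and unpacking it as keyword arguments:
import facebook  # python-facebook-sdk

graph = facebook.GraphAPI(access_token)  # access_token obtained via django-social-auth

event = {
    'name': 'Test Event',
    'privacy': 'OPEN',
    'start_time': '2013-11-04T14:42Z',
    'end_time': '2013-11-05T14:46Z',
}

# Each key becomes an ordinary POST parameter on /me/events
graph.put_object("me", "events", **event)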

Related

Flyte 0.16.2: Error loading Blob - How to get Types.Blob.fetch() to work in task decorated function?

I have a Flyte task function like this:
@task
def do_stuff(framework_obj):
    framework_obj.get_outputs()  # This calls Types.Blob.fetch(some_uri)
Trying to load a blob URI using flytekit.sdk.types.Types.Blob.fetch, but getting this error:
ERROR:flytekit: Exception when executing No temporary file system is present. Either call this method from within the context of a task or surround with a 'with LocalTestFileSystem():' block. Or specify a path when calling this function. Note: Cleanup is not automatic when a path is specified.
I can confirm I can load blobs using with LocalTestFileSystem(): in tests, but when actually running a workflow I'm not sure why I'm getting this error, since the function that does the blob processing is decorated with @task, so it's definitely a Flyte task. I also confirmed that the task node exists on the Flyte web console.
What path is the error referencing and how do I call this function appropriately?
Using Flyte Version 0.16.2
Could you please give a bit more information about the code? Is this flytekit version 0.15.x? I'm a bit confused, since that version shouldn't have the @task decorator; it should only have @python_task, which is an older API. If you want to use the new Python-native typing API you should install flytekit==0.17.0 instead.
Also, could you point to the documentation you're looking at? We've updated the docs a fair amount recently, so maybe there's some confusion around that. These are the examples worth looking at. There are also two new Python classes, FlyteFile and FlyteDirectory, which have replaced the Blob class in flytekit (though Blob remains the name of the IDL type).
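For what it's worth, a minimal sketch of the newer Python-native typing API (my own illustration, assuming flytekit>=0.17; the task body and file contents are hypothetical) passes files around as FlyteFile values rather than fetching Blobs by hand:
from flytekit import task, workflow
from flytekit.types.file import FlyteFile

@task
def summarize(data: FlyteFile) -> str:
    # Opening the FlyteFile downloads it to a local path inside the task's context
    with open(data, "r") as fh:
        return fh.read()[:100]

@workflow
def wf(data: FlyteFile) -> str:
    return summarize(data=data)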
(would've left this as a comment but I don't have the reputation to yet.)
Some code to help with fetching outputs and reading from a file output:
# Note: import paths below assume flytekit >= 0.17; they may differ in older versions.
from flytekit import task
from flytekit.clients.friendly import SynchronousFlyteClient
from flytekit.core.context_manager import FlyteContext
from flytekit.core.type_engine import TypeEngine
from flytekit.models.core.identifier import WorkflowExecutionIdentifier
from flytekit.types.file import FlyteFile

@task
def task_file_reader():
    # Talk to the Flyte admin service directly to fetch a past execution's outputs
    client = SynchronousFlyteClient("flyteadmin.flyte.svc.cluster.local:81", insecure=True)
    exec_id = WorkflowExecutionIdentifier(
        domain="development",
        project="flytesnacks",
        name="iaok0qy6k1",
    )
    data = client.get_execution_data(exec_id)
    # "o0" is the name of the first output
    lit = data.full_outputs.literals["o0"]
    ctx = FlyteContext.current_context()
    # Convert the literal into a FlyteFile, which can then be opened like a local path
    ff = TypeEngine.to_python_value(ctx, lv=lit, expected_python_type=FlyteFile)
    with open(ff, 'rb') as fh:
        print(fh.readlines())

Getting all my public posts using Facebook Graph API

How can I get all my Facebook posts using Python code and the Facebook Graph API?
I have tried using this code:
import json
import facebook

def get_basic_info(token):
    graph = facebook.GraphAPI(token)
    profile = graph.get_object('me', fields='first_name,last_name,location,link,email')
    print(json.dumps(profile, indent=5))

def get_all_posts(token):
    graph = facebook.GraphAPI(token)
    events = graph.request('type=event&limit=10000')
    print(events)

def main():
    token = "my_token"
    #get_basic_info(token)
    get_all_posts(token)

if __name__ == '__main__':
    main()
I am getting an error that says:
"GraphAPIError: (#33) This object does not exist or does not support this action".
It seems like all the other Stack Overflow questions are very old and do not apply to the newest version of the Facebook Graph API. I am not entirely sure whether you can do this using the Graph API at all.
If this is not possible using this technique, is there any other way I can get my posts using Python?
Please note that the function get_basic_info() is working perfectly.
I assume you want to get user events: https://developers.facebook.com/docs/graph-api/reference/user/events/
Be aware:
This edge is only available to a limited number of approved apps. Unapproved apps querying this edge will receive an empty data set in response. You cannot request access to this edge at this time.
Either way, the API would not be type=event&limit=10000 but /me/events instead.
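For example, with the same facebook-sdk GraphAPI object used in the question (a sketch; the args dict and limit value are my own illustration), the events edge would be requested by path rather than by passing a query string:
graph = facebook.GraphAPI(token)

# Request the edge by path; optional parameters such as limit go in the args dict
events = graph.request('/me/events', args={'limit': 100})
print(events['data'])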
I have solved this problem with the help of the first answer by @luschn.
I was making one more mistake: using events to get all my posts. Instead, I should have used me/posts in my code.
Here is the function that works perfectly in version 6.
import requests

def get_all_posts(graph):
    posts = graph.request('/me/posts')
    count = 1
    while "paging" in posts:
        print("length of the dictionary", len(posts))
        print("length of the data part", len(posts['data']))
        for post in posts["data"]:
            print(count, "\n")
            if "message" in post:  # because some posts may not have a caption
                print(post["message"])
            print("time : ", post["created_time"])
            print("id :", post["id"], "\n\n")
            count = count + 1
        if "next" not in posts["paging"]:  # the last page has no "next" link
            break
        posts = requests.get(posts["paging"]["next"]).json()
    print("end of posts")
Here, posts["data"] only gives the first 25 posts, so I have used the posts["paging"]["next"] link to fetch the next page for as long as there is one.

Getting "If-Match or If-None-Match header or entry etag attribute required" errors when batch deleting contacts

I'm using the gdata Python library to do batched deletes of contacts, and I just get the "If-Match or If-None-Match header or entry etag attribute required" error.
I think the problem started when I had to enable the Contacts API in the console (which until a few days ago wasn't required? *).
EDIT:
It's actually failing for both updating and deleting operations. Batched insert works fine.
Tried specifying the If-Match header, but it's still failing:
custom_headers = atom.client.CustomHeaders(**{'If-Match': '*'})
request_feed = gdata.contacts.data.ContactsFeed()
request_feed.AddDelete(entry=contact, batch_id_string='delete')
response_feed = self.gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch',
    custom_headers=custom_headers
)
Also created a ticket on the project page, but I doubt it will get any attention there.
EDIT 2:
Using the Batch method with force=True (which just adds the If-Match: * header) gives the same result.
response_feed = self.gd_client.Batch(
    request_feed,
    uri='https://www.google.com/m8/feeds/contacts/default/full/batch',
    force=True
)
* Can someone verify this? I never had to enable it in the console before and my app was able to use the Contacts API without problem, and I believe it wasn't even available before. I was surprised to see it yesterday.
Copying answer from the Google code ticket.
Basically, you need to patch the client's Post method to modify the request feed slightly. Here's one way to do it without directly modifying the library source:
import atom.http_core
import gdata.client

def patched_post(client, entry, uri, auth_token=None, converter=None, desired_class=None, **kwargs):
    if converter is None and desired_class is None:
        desired_class = entry.__class__
    http_request = atom.http_core.HttpRequest()
    entry_string = entry.to_string(gdata.client.get_xml_version(client.api_version))
    entry_string = entry_string.replace('ns1', 'gd')  # where the magic happens
    http_request.add_body_part(
        entry_string,
        'application/atom+xml')
    return client.request(method='POST', uri=uri, auth_token=auth_token,
                          http_request=http_request, converter=converter,
                          desired_class=desired_class, **kwargs)

# When it comes time to do a batched delete/update, instead of calling
# client.ExecuteBatch, call patched_post directly:
patched_post(client_instance, entry_feed, 'https://www.google.com/m8/feeds/contacts/default/full/batch')
The ticket referenced in the original post has some updated information and a temporary work around that allows batch deletes to succeed. So far it's working for me!
http://code.google.com/p/gdata-python-client/issues/detail?id=700
You can also specify the etag attribute to get around it. This works in the batch request payload:
<entry gd:etag="*">
  <batch:id>delete</batch:id>
  <batch:operation type="delete"/>
  <id>urlAsId</id>
</entry>
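If you are building the batch feed with the Python client rather than hand-writing the XML, the equivalent is to set the etag on each entry before adding the delete operation. A rough sketch (assuming contact is an entry previously fetched through the Contacts client; I have not verified this against every gdata version):
# Mark the entry so the serialized batch payload carries gd:etag="*"
contact.etag = '*'

request_feed = gdata.contacts.data.ContactsFeed()
request_feed.AddDelete(entry=contact, batch_id_string='delete')
response_feed = gd_client.ExecuteBatch(
    request_feed,
    'https://www.google.com/m8/feeds/contacts/default/full/batch')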

Backbone.js HTTP PUT requests fail with a 404 error when sent to a Pyramid/Cornice app

I'm using Pyramid with Cornice to create an API for a Backbone.js application to consume. My current code is working perfectly for GET and POST requests, but it is returning 404 errors when it receives PUT requests. I believe that this is because Backbone sends them as http://example.com/api/clients/ID, where ID is the id number of the object in question.
My Cornice setup code is:
clients = Service(name='clients', path='/api/clients', description="Clients")
#clients.get()
def get_clients(request):
...
#clients.post()
def create_client(request):
...
#clients.put()
def update_client(request):
...
It seems that Cornice only registers the path /api/clients and not /api/clients/{id}. How can I make it match both?
The documentation gives an example of a service that has both an object path (/users/{id}) and a collection path (/users). Would this work for you?
@resource(collection_path='/users', path='/users/{id}')
A quick glance at the code for the resource decorator shows that it mainly creates two Services: one for the object and one for the collection. Your problem can probably be solved by adding another Service:
client = Service(name='client', path='/api/clients/{id}', description="Client")
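To make that concrete, here is a sketch (my own, not from the Cornice docs) of how the two services could be wired up, with the PUT handler on the per-object service reading the id from the route matchdict:
from cornice import Service

clients = Service(name='clients', path='/api/clients', description="Clients")
client = Service(name='client', path='/api/clients/{id}', description="Client")

@clients.get()
def get_clients(request):
    ...

@clients.post()
def create_client(request):
    ...

@client.put()
def update_client(request):
    # Backbone PUTs to /api/clients/<id>; the id is available via the matchdict
    client_id = request.matchdict['id']
    ...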

Using mx:RemoteObject with web2py's @service.amfrpc decorator

I am using web2py (v1.63) and Flex 3. web2py v1.61 introduced the @service decorators, which allow you to tag a controller function with @service.amfrpc. You can then call that function remotely using http://..../app/default/call/amfrpc/[function]. See http://www.web2py.com/examples/default/tools#services. Does anybody have an example of how you would set up Flex 3 to call a function like this? Here is what I have tried so far:
<mx:RemoteObject id="myRemote" destination="amfrpc" source="amfrpc"
                 endpoint="http://{mysite}/{myapp}/default/call/amfrpc/">
    <mx:method name="getContacts"
               result="show_results(event)"
               fault="on_fault(event)" />
</mx:RemoteObject>
In my scenario, what should be the value of the destination and source attributes? I have read a couple of articles on non-web2py implementations, such as http://corlan.org/2008/10/10/flex-and-php-remoting-with-amfphp/, but they use a .../gateway.php file instead of having a URI that maps directly to the function.
Alternatively, I have been able to use flash.net.NetConnection to successfully call my remote function, but most of the documentation I have found considers this to be the old, pre-Flex 3 way of doing AMF. See http://pyamf.org/wiki/HelloWorld/Flex. Here is the NetConnection code:
gateway = new NetConnection();
gateway.connect("http://{mysite}/{myapp}/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
I have not found a way to use a RemoteObject with the @service.amfrpc decorator. However, I can use the older ActionScript code using a NetConnection (similar to what I posted originally) and pair that with a @service.amfrpc function on the web2py side. This seems to work fine. The one thing you would want to change in the NetConnection code I shared originally is adding an event listener for connection status. You can add more listeners if you feel the need, but I found that NetStatusEvent was a must; it fires if the server is not responding. Your connection setup would look like:
gateway = new NetConnection();
gateway.addEventListener(NetStatusEvent.NET_STATUS, gateway_status);
gateway.connect("http://127.0.0.1:8000/robs_amf/default/call/amfrpc/");
resp = new Responder(show_results, on_fault);
gateway.call("getContacts", resp);
-Rob
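For reference, the web2py side of that call is just a controller function tagged with the service decorator. A minimal sketch (the getContacts body and the returned data are placeholders of mine, following the standard web2py service scaffolding of that era):
# in controllers/default.py
from gluon.tools import Service
service = Service(globals())

@service.amfrpc
def getContacts():
    # Placeholder payload; return whatever the Flex client expects
    return [{'name': 'Alice', 'email': 'alice@example.com'}]

def call():
    # Exposes http://{mysite}/{myapp}/default/call/amfrpc/getContacts
    session.forget()
    return service()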
