It seems that Amazon has changed their API; I get an error from Python:
id = "..."
pas = "..."
produit = amazon.API(id, pas, "fr")
produit.item_search("playstation")
and I get this error:
AWSError: AWS.MissingParameters: Your request is missing required
parameters. Required parameters include AssociateTag.
I've also tried the example from the documentation, and it fails the same way:
produit.item_search('Books', Publisher='Galileo Press')
AWSError: AWS.MissingParameters: Your request is missing required
parameters. Required parameters include AssociateTag.
I've found this:
Changing the example to:
api = API(AWS_KEY, SECRET_KEY, 'de',ASSOC_TAG)
from here:
https://bitbucket.org/basti/python-amazon-product-api/issue/33/required-parameters-include-associatetag
Any ideas? Or should the documentation be updated?
They dropped support for obsolete APIs recently, and the newest version requires a valid Associate Tag.
https://affiliate-program.amazon.com/gp/advertising/api/detail/api-changes.html
Associate Tag Parameter: Every request made to the API should include a valid Associate Tag. Any request that does not contain a valid Associate Tag will be rejected with an appropriate error message.
ASSOC_TAG must be your real tag (one that matches the API key).
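For example, assuming the package in question is python-amazon-product-api (as in the linked issue), a minimal sketch just passes the Associate Tag as the fourth constructor argument; the key values below are placeholders:
from amazonproduct import API

AWS_KEY = "..."         # your Access Key ID
SECRET_KEY = "..."      # your Secret Access Key
ASSOC_TAG = "mytag-21"  # placeholder: the Associate Tag registered for your account

api = API(AWS_KEY, SECRET_KEY, "fr", ASSOC_TAG)
results = api.item_search("Books", Publisher="Galileo Press")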
I am using the Confluence API from Python to update an existing Confluence page, but I am facing the error below:
atlassian.errors.ApiValueError: No space or no content type, or setup a wrong version type set to content, or status param is not draft and status content is current
The strange thing is that I was able to update the page maybe 10 times before, but it suddenly started throwing this error. Maybe I am missing something?
Can anyone please suggest what is missing? I am using the snippet below:
from pprint import pprint
from atlassian import Confluence

confluence = Confluence(url=confluence_url, username=userid, password=password)
status = confluence.update_page(page_id, title, pagecontent)
pprint(status)
Remove the \n characters; you may need to use a list of strings.
I saw the same error because my pagecontent contained <DC>, and in HTML < and > need to be escaped.
It seems the third parameter of update_page requires a valid HTML string.
You can use the html.escape method if you are using Python 3.4+:
from html import escape
from pprint import pprint
from atlassian import Confluence

confluence = Confluence(url=confluence_url, username=userid, password=password)
status = confluence.update_page(page_id, title, escape(pagecontent))
pprint(status)
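For reference, a quick illustration of what escape does to the angle brackets (the sample string is made up):
from html import escape

print(escape("<DC> tags & more"))  # -> &lt;DC&gt; tags &amp; more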
I've just discovered something strange. When downloading data from Facebook with GET using the requests 2.18.4 library, I get an error when I just use
requests.get('https://.../{}/likes?acces_token={}'.format(userID,token))
into which I substitute the user ID and access token - the API does not read the access token correctly.
But it works fine as
requests.get('https://../{}'.format(userID), params={"access_token":token})
It also works when I copy and paste the values into the appropriate places by hand in the Python console.
So my hypothesis is that it has something to do with how the token string gets encoded via params versus plain string formatting. But what I don't understand at all is why that would be the case. Or is the ? character somehow special here?
Double check if both the URLs are the same (in your post they differ by the /likes substring).
Then you can check how the requests library builds the URL from the params argument:
url = 'https://facebook.com/.../{}'.format(userID)
r = requests.Request('GET', url, params={"access_token": token})
pr = r.prepare()
print(pr.url)
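For example, a minimal comparison sketch (the Graph API base URL, user ID and token below are placeholders) makes any difference between the hand-built URL and the one requests prepares visible:
import requests

userID = "1234567890"  # placeholder
token = "EAAB..."      # placeholder access token
base = 'https://graph.facebook.com/{}/likes'.format(userID)

# URL built by hand with string formatting
manual = '{}?access_token={}'.format(base, token)

# URL built by requests from the params argument
prepared = requests.Request('GET', base, params={"access_token": token}).prepare().url

print(manual)
print(prepared)
print(manual == prepared)  # a typo in the query key or extra encoding shows up as False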
I've registered at http://www.developers.elsevier.com/action/devprojects. I created a project and got my Scopus API key.
Now, using this generated key, I would like to find an author by first name, last name and subject area. I make requests from my university network, which is allowed to access Scopus (I have full manual access to Scopus search and use it from Firefox with no problem). I want to automate my Scopus mining with a simple script that finds the publications of an author given his/her first name, last name and subject area.
Here's my code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import requests
import json
from scopus import SCOPUS_API_KEY
scopus_author_search_url = 'http://api.elsevier.com/content/search/author?'
headers = {'Accept':'application/json', 'X-ELS-APIKey': SCOPUS_API_KEY}
search_query = 'query=AUTHFIRST(%s) AND AUTHLASTNAME(%s) AND SUBJAREA(%s)' % ('John', 'Kitchin', 'COMP')
# api_resource = "http://api.elsevier.com/content/search/author?apiKey=%s&" % (SCOPUS_API_KEY)
# request with first searching page
page_request = requests.get(scopus_author_search_url + search_query, headers=headers)
print page_request.url
# response to json
page = json.loads(page_request.content.decode("utf-8"))
print page
Where SCOPUS_API_KEY looks just like this: SCOPUS_API_KEY="xxxxxxxx".
Although I have full access to Scopus from my university network, I'm getting this response:
{u'service-error': {u'status': {u'statusText': u'Requestor
configuration settings insufficient for access to this resource.',
u'statusCode': u'AUTHENTICATION_ERROR'}}}
The generated link looks like this: http://api.elsevier.com/content/search/author?query=AUTHFIRST(John)%20AND%20AUTHLASTNAME(Kitchin)%20AND%20SUBJAREA(COMP) and when I click it, it shows an XML file:
<service-error><status>
<statusCode>AUTHORIZATION_ERROR</statusCode>
<statusText>No APIKey provided for request</statusText>
</status></service-error>
Or, when I change the scopus_author_search_url to "http://api.elsevier.com/content/search/author?apiKey=%s&" % (SCOPUS_API_KEY) I'm getting:
{u'service-error': {u'status': {u'statusText': u'Requestor configuration settings insufficient for access to this resource.', u'statusCode': u'AUTHENTICATION_ERROR'}}} and the XML file:
<service-error>
<status>
<statusCode>AUTHENTICATION_ERROR</statusCode>
<statusText>Requestor configuration settings insufficient for access to this resource.</statusText>
</status>
</service-error>
What can be the cause of this problem and how can I fix it?
I have just registered for an API key and tested it first with this URL:
http://api.elsevier.com/content/search/author?apikey=4xxxxxxxxxxxxxxxxxxxxxxxxxxxxx43&query=AUTHFIRST%28John%29+AND+AUTHLASTNAME%28Kitchin%29+AND+SUBJAREA%28COMP%29
This works fine from my university network. I also tested a second API key, so I have verified one registered with a website on my university domain and one registered with the website http://apitest.example.com, ruling out the domain name used at registration as the source of your problem.
I tested this:
- in the browser,
- using your Python code with the API key in the headers; the only change I made to your code was removing
from scopus import SCOPUS_API_KEY
and adding
SCOPUS_API_KEY = '4xxxxxxxxxxxxxxxxxxxxxxxxxxxxx43'
- using your Python code adapted to put the apikey in the URL instead of the headers.
In all cases, the query returns two authors, one at Carnegie Mellon and one at Palo Alto.
I can't replicate your error message. If I try to use the API key from an IP address not registered with Elsevier (e.g. my home computer), I see a different error:
<service-error>
<status>
<statusCode>AUTHENTICATION_ERROR</statusCode>
<statusText>Client IP Address: xxx.yyy.aaa.bbb does not resolve to an account</statusText>
</status>
</service-error>
If I use a random (wrong) API key from the university network, I see
<service-error>
<status>
<statusCode>AUTHORIZATION_ERROR</statusCode>
<statusText>APIKey <mad3upa1phanum3r1ck3y> with IP address <my.uni.IP.add> is unrecognized or has insufficient privileges for access to this resource</statusText>
</status>
</service-error>
Debug steps
As I can't replicate your problem, here are some diagnostic steps you can use to narrow it down:
1. Use your browser at uni to actually submit the API query with your key in the URL (i.e. copy the URL above, paste it into the address bar, substitute your key and see whether you get the XML back).
2. If 1 returns the XML you expect, move on to submitting the request via Python: first copy the exact URL straight into Python (no variable substitution via %s, no apiKey in the header) and simply do a requests.get() on it, as in the sketch after this list.
3. If 2 returns correctly, ensure that your SCOPUS_API_KEY holds the exact key value, no more, no less; i.e. print SCOPUS_API_KEY should output your apikey: 4xxxxxxxxxxxxxxxxxxxxxxxxxxxxx43.
4. If 1 returns the error, it looks like your uni (for whatever reason) does not have access to the author search API. This doesn't make much sense given that you can perform a manual search, but that is all I can conclude.
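A minimal sketch of step 2, with the placeholder key from above (the Accept header just asks the API for JSON instead of XML):
import requests

# step 2: the exact URL, no %s substitution, no apiKey header
url = ('http://api.elsevier.com/content/search/author'
       '?apikey=4xxxxxxxxxxxxxxxxxxxxxxxxxxxxx43'
       '&query=AUTHFIRST(John)+AND+AUTHLASTNAME(Kitchin)+AND+SUBJAREA(COMP)')
page_request = requests.get(url, headers={'Accept': 'application/json'})
print(page_request.status_code)
print(page_request.content)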
Docs
For reference, the authentication algorithm documentation is here, but it is not very easy to follow. You are following authentication option 1, and your method should just work.
N.B. The API is limited to 5000 author retrievals per week. If you have run a lot of queries in a loop, even if they have failed, it is possible that you have exceeded that...
For future reference: the OP was using the package scopus, which has long since been renamed to pybliometrics.
Nowadays you can do:
from pybliometrics.scopus import AuthorSearch

q = "AUTHFIRST(John) AND AUTHLASTNAME(Kitchin) AND SUBJAREA(COMP)"
s = AuthorSearch(q)  # handles access, retrieval, parsing and even caches results
print(s)
results = s.authors  # holds all the information as a list of namedtuples
print(results)  # you can put this into a pandas DataFrame as well
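Continuing that snippet, the list of namedtuples can indeed be loaded straight into a pandas DataFrame (assuming pandas is installed):
import pandas as pd

df = pd.DataFrame(results)  # one row per author, one column per namedtuple field
print(df.head())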
I recently started working with Google App Engine and am facing the following problem:
I have a main.py where a user sees his own comments plus those of others. Now I need to add an EditComment.py, to which a user is directed when he wants to edit a comment.
I am working with the guestbook sample application, and to actually fetch the selected comment I need both the guestbook name and the content of the comment. How do I create this URL?
In other words, I need to create a URL like
/edit?guestbook="Family"&content="helloworld"
I tried this
# I need to send guestbook_name and the content of the greeting in order to fetch
# the row from the database, so I show the text of the greeting and give a URL to the edit page
content_toSend = {'guestbook_name': guestbook_name, 'content': greeting.content}
self.response.write('<blockquote>%s</blockquote>' %
                    (content_toSend, greeting.content))
# But the handler on the other side receives only the first variable of the dict in the GET request
so that the user can click on a greeting and be directed to the edit page. But the GET request only carries the first variable (guestbook_name) in the URL. How do I send the whole dictionary?
Edit: I had tried urllib.urlencode, but the handler in webapp2 requires a dict, so that didn't work.
The urlencode() method from the urllib standard library can be useful here.
Edit, with an example:
import urllib

content_toSend = urllib.urlencode({
    'guestbook_name': guestbook_name,
    'content': greeting.content,
})
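For instance, a self-contained sketch with placeholder values (Python 2, as in the GAE runtime of that era; the /edit path comes from the question, and on the receiving webapp2 handler the two values can be read back with self.request.get('guestbook_name') and self.request.get('content')):
import urllib

guestbook_name = "Family"        # placeholder values
greeting_content = "helloworld"

query = urllib.urlencode({
    'guestbook_name': guestbook_name,
    'content': greeting_content,
})
edit_url = '/edit?%s' % query
print(edit_url)  # e.g. /edit?guestbook_name=Family&content=helloworld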
If you know that you are going to have these two variables in the dictionary, why don't you try this:
self.response.write('<blockquote><a href="/edit?guestbook=%s&content=%s">%s</a></blockquote>' %
                    (content_toSend['guestbook_name'], content_toSend['content'], greeting.content))
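On the receiving side, a minimal sketch of the edit handler (class name and route are illustrative; webapp2's self.request.get reads the query parameters back):
import webapp2

class EditComment(webapp2.RequestHandler):
    def get(self):
        # read back the values that were urlencoded into the edit link
        guestbook_name = self.request.get('guestbook_name')
        content = self.request.get('content')
        self.response.write('Editing "%s" in guestbook %s' % (content, guestbook_name))

app = webapp2.WSGIApplication([('/edit', EditComment)])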
The DetailPageURLs returned by ItemSearch seem to include an incorrect ID/tag rather than the Associate ID I made the search request with.
I'm getting:
http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=foo-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009
When I expect:
http://www.amazon.co.uk/gp/product/1590595009?SubscriptionId=XXX&tag=wwwmydomain-12&linkCode=as2&camp=1634&creative=19450&creativeASIN=1590595009
How do I get the correct tag? (Note that SO rewrites the above links with its own Associate ID if you click either of them.)
I'm using Python and PyAWS 0.3.0, although I think the problem is with my request, rather than with the API wrapper.
(As an aside, the Amazon Associates Link Checker (U.K. store / U.S. store) is invaluable for testing these links.)
A simple error in the end: I was including the tag in the initial search:
for searchResult in ecs.ItemSearch(item, SearchIndex=index,
                                   AssociateTag='wwwmydomain-12'):
But not in the secondary loop that steps through each result getting more details:
for item in ecs.ItemSearch(searchResult.ASIN, ResponseGroup='Medium'):
should be:
for item in ecs.ItemSearch(searchResult.ASIN, ResponseGroup='Medium',
                           AssociateTag='wwwodbodycom-21'):
The tag is needed in both calls - it seems it is not carried over from the first one.