Amazon Product Advertising API Signature - python

I am trying to produce a signature for the Amazon Product Advertising API. I've been at it a few hours and am still getting a 403 - could anyone have a quick look at the code and tell me if I am doing anything wrong, please?
This is the function I use to create the signature
def create_signature(service, operation, version, search_index, keywords, associate_tag, time_stamp, access_key):
    start_string = "GET\n" + \
        "webservices.amazon.com\n" + \
        "/onca/xml\n" + \
        "AWSAccessKeyId=" + access_key + \
        "&AssociateTag=" + associate_tag + \
        "&Keywords=" + keywords + \
        "&Operation=" + operation + \
        "&SearchIndex=" + search_index + \
        "&Service=" + service + \
        "&Timestamp=" + time_stamp + \
        "&Version=" + version
    dig = hmac.new("MYSECRETID", msg=start_string, digestmod=hashlib.sha256).digest()
    sig = urllib.quote_plus(base64.b64encode(dig).decode())
    return sig
And this is the function I use to return the string for the request
def ProcessRequest(request_item):
    start_string = "http://webservices.amazon.com/onca/xml?" + \
        "AWSAccessKeyId=" + request_item.access_key + \
        "&AssociateTag=" + request_item.associate_tag + \
        "&Keywords=" + request_item.keywords + \
        "&Operation=" + request_item.operation + \
        "&SearchIndex=" + request_item.search_index + \
        "&Service=" + request_item.service + \
        "&Timestamp=" + request_item.time_stamp + \
        "&Version=" + request_item.version + \
        "&Signature=" + request_item.signature
    return start_string
And this is the code that runs it:
_AWSAccessKeyID = "MY KEY"
_AWSSecretKey = "MY SECRET KEY"

def ProduceTimeStamp():
    time = datetime.datetime.now().isoformat()
    return time

item = Class_Request.setup_request("AWSECommerceService", "ItemSearch", "2011-08-01", "Books", "harry%20potter", "PutYourAssociateTagHere", ProduceTimeStamp(), _AWSAccessKeyID)
item2 = Class_Request.ProcessRequest(item)
An example web request it spits out that produces a 403 is this:
http://webservices.amazon.com/onca/xml?AWSAccessKeyId=AKIAIY4QS5QNDAI2NFLA&AssociateTag=PutYourAssociateTagHere&Keywords=harry%20potter&Operation=ItemSearch&SearchIndex=Books&Service=AWSECommerceService&Timestamp=2015-02-26T23:53:14.330000&Version=2011-08-01&Signature=KpC%2BUsyJcw563LzIgxf7GkYI5IV6EfmC0%2FsH8LuP%2FEk%3D
There is also a holder class called Class_Request that just has a field for every request field.
The instructions I followed are here, if anyone is interested: http://docs.aws.amazon.com/AWSECommerceService/latest/DG/rest-signature.html
I hope someone can help; I am new to Python and a bit lost.

You can simply use one of the existing solutions available from PyPI:
bottlenose
python-amazon-product-api
python-amazon-simple-product-api
Or compare your solution to one of them, for example:
https://bitbucket.org/basti/python-amazon-product-api/src/41529579819c75ff4f03bc93ea4f35137716ebf2/amazonproduct/api.py?at=default#cl-143
Your timestamp, for instance, looks a bit short.

Check again that the timestamp is right. It should have the format 2015-03-27T15:10:17.000Z, but in your example web request it looks like 2015-02-26T23:53:14.330000 (no trailing Z).
A good tool to try out your links is Amazon's signed requests helper: https://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
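For reference, here is a minimal sketch of producing a timestamp in that format and building a signed URL (Python 2, to match the question; the function name and parameter values are placeholders, and the sorting/percent-encoding follows the REST signature documentation linked in the question):
import base64, datetime, hashlib, hmac, urllib

def build_signed_url(access_key, secret_key, associate_tag, keywords):
    # Timestamp in the expected format, e.g. 2015-03-27T15:10:17.000Z (UTC)
    timestamp = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.000Z')
    params = {
        'Service': 'AWSECommerceService',
        'Operation': 'ItemSearch',
        'AWSAccessKeyId': access_key,
        'AssociateTag': associate_tag,
        'SearchIndex': 'Books',
        'Keywords': keywords,  # plain text, e.g. 'harry potter'
        'Timestamp': timestamp,
        'Version': '2011-08-01',
    }
    # Canonical query string: keys sorted, values percent-encoded
    canonical = '&'.join(k + '=' + urllib.quote(str(params[k]), safe='-_.~')
                         for k in sorted(params))
    string_to_sign = 'GET\nwebservices.amazon.com\n/onca/xml\n' + canonical
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha256).digest()
    signature = urllib.quote(base64.b64encode(digest), safe='-_.~')
    return 'http://webservices.amazon.com/onca/xml?' + canonical + '&Signature=' + signature
Signing the exact same percent-encoded query string that goes on the wire (including the encoded colons in the timestamp) is usually what makes the 403 go away.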

This worked for me (PHP):
$str = "Service=AWSECommerceService&Operation=ItemSearch&AWSAccessKeyId={Access Key}&Keywords=Harry%20Potter&ResponseGroup=Images%2CItemAttributes%2COffers&SearchIndex=Books&Timestamp=2019-08-11T17%3A51%3A56.000Z";
$ar = explode("&", $str);
natsort($ar);
$str = "GET
webservices.amazon.com
/onca/xml
";
$str .= implode("&", $ar);
$str = urlencode(base64_encode(hash_hmac("sha256",$str,'{Secret Key Here}',true)));
http://webservices.amazon.com/onca/xml?Service=AWSECommerceService&Operation=ItemSearch&AWSAccessKeyId={Access Key}&Keywords=Harry%20Potter&ResponseGroup=Images%2CItemAttributes%2COffers&SearchIndex=Books&Timestamp=2019-08-11T17%3A51%3A56.000Z&Signature=$str
Remember: If you get this error
Your AccessKey Id is not registered for Product Advertising API. Please use the AccessKey Id obtained after registering at https://affiliate-program.amazon.com/assoc_credentials/home
Go to https://affiliate-program.amazon.com/assoc_credentials/home


insert array into mongodb using pymongo

I am trying to add an array into MongoDB using PyMongo.
I have another program that will return something like
['1 aksdfjas;dkfjsa;dfkj','2 ;alksdjf;askdjf;asdfjkasdf', '3 ;alksdfj;asdlkfj;asdfj']
and I want to add it to the insert.
1) I cannot think of any other way to do it, so I am converting the items to a string, concatenating them, and trying to add that to the post (there must be a better way, no?)
2) When I do this, instead of the desired effect, I get
["'1 aksdfjas;dkfjsa;dfkj','2 ;alksdjf;askdjf;asdfjkasdf', '3 ;alksdfj;asdlkfj;asdfj'",]
There are extra quotes... how can I correct this?
import pymongo
import time
import datetime
from random import *
from pymongo import MongoClient

client = MongoClient('mongodb://user:abc123#10.0.0.1:27017')

stringToStuff = 'blabh blah blahhhhh'

def createLoop():
    return randint(5,15)

def tGenerator(e):
    returnString = ''
    for i in range(e):
        returnString += "'" + str(i+1) + " " + stringToStuff + "',"
    return returnString

db = client['pytest']
collection = db['test']
names = db.test.find()
collection2 = db['pytestResult']

for p in names:
    print(p['name'])
    name2 = p['name']
    #post = {"name":name2,"score":8,"date":datetime.datetime.now()}
    post = {
        "name": name2,
        "score": 8,
        "date": datetime.datetime.now(),
        #"output": ['1 aksdfjas;dkfjsa;dfkj','2 ;alksdjf;askdjf;asdfjkasdf', '3 ;alksdfj;asdlkfj;asdfj',]
        "output": [tGenerator(createLoop())]
    }
    collection2.insert_one(post)
First, change how you are constructing the string in the tGenerator method to this:
returnString += str(i+1) + " " + stringToStuff + ","
Second, you can use the split method to do what you need, so your insertion will look something like this:
post = {
    "name": name2,
    "score": 8,
    "date": datetime.datetime.now(),
    "output": tGenerator(createLoop()).split(',')
}
collection2.insert_one(post)
I hope the above works for you.
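A slightly cleaner alternative (just a sketch, reusing the names from the question) is to have tGenerator return a list directly; this also avoids the empty element that split(',') leaves behind because of the trailing comma:
def tGenerator(e):
    # Return a list of strings directly instead of one comma-separated string
    return [str(i + 1) + " " + stringToStuff for i in range(e)]

post = {
    "name": name2,
    "score": 8,
    "date": datetime.datetime.now(),
    "output": tGenerator(createLoop())  # already a list, no split() needed
}
collection2.insert_one(post)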

api.depends doesn't work with widget='mail_thread'

I have this problem: I have a field that filters the messages when you click on Send message in the chatter (social network widget), but I have to refresh the page to see the data. I want it to update at the same time I send the message.
@api.multi
@api.depends('message_ids')
def _compute_defect_summary_attachment_ids(self):
    body = ' '
    attachments = []
    cont = 1
    for rec in self:
        for msj in rec.message_ids:
            if msj.message_type == 'comment' and msj.subtype_id.name == 'Debates':
                soup = BeautifulSoup(msj.body)
                body += u'- Observación ' + str(cont) + ': ' + soup.text + '\n' \
                    + '- Reportante: ' + msj.create_uid.name + '\n' \
                    + '- Fecha: ' + msj.date + '\n\n'
                cont += 1
        rec.update({
            'defect_summary': body})
While trying to fix it, I noticed that it stops working when I add the widget. Any ideas? I also need to use the widget.
Odoo 10: inherit the mail.thread model in your class, and you can also add message_ids in the view. You can use this in any module or class.
1) models/custom.py
class CustomDetails(models.Model):
    _name = 'custom.details'
    _inherit = ['mail.thread', 'ir.needaction_mixin']
2) views/custom_view.xml
<field name="message_ids" widget="mail_thread"/>
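Tying that back to the question, here is a minimal sketch (Odoo 10 style; the field and method names are borrowed from the question, the rest is assumption) of a stored computed field that should recompute when message_ids changes:
from odoo import api, fields, models

class CustomDetails(models.Model):
    _name = 'custom.details'
    _inherit = ['mail.thread', 'ir.needaction_mixin']

    # store=True makes Odoo write the value back whenever message_ids changes
    defect_summary = fields.Text(compute='_compute_defect_summary', store=True)

    @api.depends('message_ids')
    def _compute_defect_summary(self):
        for rec in self:
            parts = []
            for msj in rec.message_ids:
                if msj.message_type == 'comment':
                    parts.append(msj.body or '')
            rec.defect_summary = '\n'.join(parts)
With store=True the value is persisted when a new message arrives, so the form should show it without a manual refresh.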

Get more than 50 members of a group from Soundcloud

I'm trying to get all the members of a SoundCloud group using the Python API.
So far I can get the first 50, but providing the mentioned "linked_partitioning=1" argument doesn't seem to move on to the next set of members.
My code is:
# Login and authenticate
client = soundcloud.Client(client_id = clientId,
                           client_secret = clientSecret,
                           username = username,
                           password = password)

# print authenticated user's username
print client.get('/me').username

# Get members
count = 1
total = 0
while count > 0 and total < max:
    members = client.get('/resolve', url=group_url, linked_partitioning=1)
    count = len(members)
    total += count

    # Debug Output
    print "Total: " + str(total) + ". Retrieved another " + str(count) + " members."
    print members[0].username
I've been looking at: https://developers.soundcloud.com/docs/api/guide#pagination but still haven't managed to find a solution.
Snippet of working PHP code using linked_partitioning and limit (the maximum value can be 200). The default result set size is 50.
I use the limit with all the endpoints, but have not as yet touched groups, so I can't verify that it works there.
$qryString = self::API_ENDPOINT . self::SEARCH_API_ARTISTS
    . "/" . $userId . "/tracks?" . $this->getClientId(true);
$qryString .= "&limit=" . self::MAX_API_PAGE_SIZE;
$qryString .= "&linked_partitioning=1";

Reduce RAM usage in Python script

I've written a quick little program to scrape book data off of a UNESCO website which contains information about book translations. The code is doing what I want it to, but by the time it's processed about 20 countries, it's using ~6GB of RAM. Since there are around 200 I need to process, this isn't going to work for me.
I'm not sure where all the RAM usage is coming from, so I'm not sure how to reduce it. I'm assuming that it's the dictionary that's holding all the book information, but I'm not positive. I'm not sure if I should simply make the program run once for each country, rather than processing the lot of them? Or if there's a better way to do it?
This is the first time I've written anything like this, and I'm a fairly novice, self-taught programmer, so please point out any significant flaws in the code, or any improvement tips you have that may not directly relate to the question at hand.
This is my code; thanks in advance for any assistance.
from __future__ import print_function
import urllib2, os
from bs4 import BeautifulSoup, SoupStrainer
''' Set list of countries and their code for niceness in explaining what
is actually going on as the program runs. '''
countries = {"AFG":"Afghanistan","ALA":"Aland Islands","DZA":"Algeria"}
'''List of country codes since dictionaries aren't sorted in any
way, this makes processing easier to deal with if it fails at
some point, mid run.'''
country_code_list = ["AFG","ALA","DZA"]
base_url = "http://www.unesco.org/xtrans/bsresult.aspx?lg=0&c="
destination_directory = "/Users/robbie/Test/"
only_restable = SoupStrainer(class_="restable")
class Book(object):

    def set_author(self, book):
        '''Parse the webpage to find author names. Finds last name, then
        first name of original author(s) and sets the Book object's
        Author attribute to the resulting string.'''
        authors = ""
        author_last_names = book.find_all('span', class_="sn_auth_name")
        author_first_names = book.find_all('span', attrs={
            'class': "sn_auth_first_name"})
        if author_last_names == []: self.Author = [" "]
        for author in author_last_names:
            try:
                first_name = author_first_names.pop()
                authors = authors + author.getText() + ', ' + \
                    first_name.getText()
            except IndexError:
                authors = authors + (author.getText())
        self.author = authors

    def set_quality(self, book):
        ''' Check to see if book page is using Quality, then set it if
        so.'''
        quality = book.find_all('span', class_="sn_auth_quality")
        if len(quality) == 0: self.quality = " "
        else: self.quality = quality[0].contents[0]

    def set_target_title(self, book):
        target_title = book.find_all('span', class_="sn_target_title")
        if len(target_title) == 0: self.target_title = " "
        else: self.target_title = target_title[0].contents[0]

    def set_target_language(self, book):
        target_language = book.find_all('span', class_="sn_target_lang")
        if len(target_language) == 0: self.target_language = " "
        else: self.target_language = target_language[0].contents[0]

    def set_translator_name(self, book):
        translators = ""
        translator_last_names = book.find_all('span', class_="sn_transl_name")
        translator_first_names = book.find_all('span',
                                                class_="sn_transl_first_name")
        if translator_first_names == [] and translator_last_names == []:
            self.translators = " "
            return None
        for translator in translator_last_names:
            try:
                first_name = translator_first_names.pop()
                translators = translators + \
                    (translator.getText() + ','
                     + first_name.getText())
            except IndexError:
                translators = translators + \
                    (translator.getText())
        self.translators = translators

    def set_published_city(self, book):
        published_city = book.find_all('span', class_="place")
        if len(published_city) == 0:
            self.published_city = " "
        else: self.published_city = published_city[0].contents[0]

    def set_publisher(self, book):
        publisher = book.find_all('span', class_="place")
        if len(publisher) == 0:
            self.publisher = " "
        else: self.publisher = publisher[0].contents[0]

    def set_published_country(self, book):
        published_country = book.find_all('span',
                                           class_="sn_country")
        if len(published_country) == 0:
            self.published_country = " "
        else: self.published_country = published_country[0].contents[0]

    def set_year(self, book):
        year = book.find_all('span', class_="sn_year")
        if len(year) == 0:
            self.year = " "
        else: self.year = year[0].contents[0]

    def set_pages(self, book):
        pages = book.find_all('span', class_="sn_pagination")
        if len(pages) == 0:
            self.pages = " "
        else: self.pages = pages[0].contents[0]

    def set_edition(self, book):
        edition = book.find_all('span', class_="sn_editionstat")
        if len(edition) == 0:
            self.edition = " "
        else: self.edition = edition[0].contents[0]

    def set_original_title(self, book):
        original_title = book.find_all('span', class_="sn_orig_title")
        if len(original_title) == 0:
            self.original_title = " "
        else: self.original_title = original_title[0].contents[0]

    def set_original_language(self, book):
        languages = ''
        original_languages = book.find_all('span',
                                            class_="sn_orig_lang")
        for language in original_languages:
            languages = languages + language.getText() + ', '
        self.original_languages = languages

    def export(self, country):
        ''' Function to allow us to easilly pull the text from the
        contents of the Book object's attributes and write them to the
        country in which the book was published's CSV file.'''
        file_name = os.path.join(destination_directory + country + ".csv")
        with open(file_name, "a") as by_country_csv:
            print(self.author.encode('UTF-8') + " & " + \
                  self.quality.encode('UTF-8') + " & " + \
                  self.target_title.encode('UTF-8') + " & " + \
                  self.target_language.encode('UTF-8') + " & " + \
                  self.translators.encode('UTF-8') + " & " + \
                  self.published_city.encode('UTF-8') + " & " + \
                  self.publisher.encode('UTF-8') + " & " + \
                  self.published_country.encode('UTF-8') + " & " + \
                  self.year.encode('UTF-8') + " & " + \
                  self.pages.encode('UTF-8') + " & " + \
                  self.edition.encode('UTF-8') + " & " + \
                  self.original_title.encode('UTF-8') + " & " + \
                  self.original_languages.encode('UTF-8'), file=by_country_csv)
        by_country_csv.close()

    def __init__(self, book, country):
        ''' Initialize the Book object by feeding it the HTML for its
        row'''
        self.set_author(book)
        self.set_quality(book)
        self.set_target_title(book)
        self.set_target_language(book)
        self.set_translator_name(book)
        self.set_published_city(book)
        self.set_publisher(book)
        self.set_published_country(book)
        self.set_year(book)
        self.set_pages(book)
        self.set_edition(book)
        self.set_original_title(book)
        self.set_original_language(book)
def get_all_pages(country, base_url):
    ''' Create a list of URLs to be crawled by adding the ISO_3166-1_alpha-3
    country code to the URL and then iterating through the results every 10
    pages. Returns a string.'''
    base_page = urllib2.urlopen(base_url + country)
    page = BeautifulSoup(base_page, parse_only=only_restable)
    result_number = page.find_all('td', class_="res1", limit=1)
    if not result_number:
        return 0
    str_result_number = str(result_number[0].getText())
    results_total = int(str_result_number.split('/')[1])
    page.decompose()
    return results_total
def build_list(country_code_list, countries):
    ''' Build the list of all the books, and return a list of Book objects
    in case you want to do something with them in something else, ever.'''
    for country in country_code_list:
        print("Processing %s now..." % countries[country])
        results_total = get_all_pages(country, base_url)
        for url in range(results_total):
            if url % 10 == 0:
                all_books = []
                target_page = urllib2.urlopen(base_url + country
                                              + "&fr=" + str(url))
                page = BeautifulSoup(target_page, parse_only=only_restable)
                books = page.find_all('td', class_="res2")
                for book in books:
                    all_books.append(Book(book, country))
                page.decompose()
                for title in all_books:
                    title.export(country)
    return

if __name__ == "__main__":
    build_list(country_code_list, countries)
    print("Completed.")
I guess I'll just list off some of the problems or possible improvements in no particular order:
Follow PEP 8.
Right now, you've got lots of variables and functions named using camel case, like setAuthor. That's not the conventional style for Python; Python would typically name that set_author (and published_country rather than PublishedCountry, etc.). You can even change the names of some of the things you're calling: for one, BeautifulSoup supports findAll for compatibility, but find_all is recommended.
Besides naming, PEP 8 also specifies a few other things; for example, you'd want to rewrite this:
if len(resultNumber) == 0 : return 0
as this:
if len(result_number) == 0:
    return 0
or even taking into account the fact that empty lists are falsy:
if not result_number:
    return 0
Pass a SoupStrainer to BeautifulSoup.
The information you're looking for is probably in only part of the document; you don't need to parse the whole thing into a tree. Pass a SoupStrainer as the parse_only argument to BeautifulSoup. This should reduce memory usage by discarding unnecessary parts early.
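For example (illustrative only; html here stands for whatever markup was fetched):
from bs4 import BeautifulSoup, SoupStrainer

only_results = SoupStrainer(class_="restable")         # match only the results table
page = BeautifulSoup(html, parse_only=only_results)    # everything outside it is dropped while parsing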
decompose the soup when you're done with it.
Python primarily uses reference counting, so removing all circular references (as decompose does) should let its primary mechanism for garbage collection, reference counting, free up a lot of memory. Python also has a semi-traditional garbage collector to deal with circular references, but reference counting is much faster.
Don't make Book.__init__ write things to disk.
In most cases, I wouldn't expect just creating an instance of a class to write something to disk. Remove the call to export; let the user call export if they want it to be put on the disk.
Stop holding on to so much data in memory.
You're accumulating all this data into a dictionary just to export it afterwards. The obvious thing to do to reduce memory is to dump it to disk as soon as possible. Your comment indicates that you're putting it in a dictionary to be flexible; but that doesn't mean you have to collect it all in a list: use a generator, yielding items as you scrape them. Then the user can iterate over it just like a list:
for book in scrape_books():
    book.export()
…but with the advantage that at most one book will be kept in memory at a time.
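As a rough sketch of that idea (reusing Book, get_all_pages, base_url, only_restable and the country data from the question's script; not a drop-in replacement), the scraping loop could become a generator:
def scrape_books(country_code_list, countries):
    '''Yield one (country, Book) pair at a time instead of accumulating a list.'''
    for country in country_code_list:
        print("Processing %s now..." % countries[country])
        results_total = get_all_pages(country, base_url)
        for offset in range(0, results_total, 10):
            target_page = urllib2.urlopen(base_url + country + "&fr=" + str(offset))
            page = BeautifulSoup(target_page, parse_only=only_restable)
            for book in page.find_all('td', class_="res2"):
                yield country, Book(book, country)
            page.decompose()

if __name__ == "__main__":
    for country, book in scrape_books(country_code_list, countries):
        book.export(country)  # at most one Book is alive at a time
    print("Completed.")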
Use the functions in os.path rather than munging paths yourself.
Your code right now is rather fragile when it comes to path names. If I accidentally removed the trailing slash from destinationDirectory, something unintended happens. Using os.path.join prevents that from happening and deals with cross-platform differences:
>>> os.path.join("/Users/robbie/Test/", "USA")
'/Users/robbie/Test/USA'
>>> os.path.join("/Users/robbie/Test", "USA") # still works!
'/Users/robbie/Test/USA'
>>> # or say we were on Windows:
>>> os.path.join(r"C:\Documents and Settings\robbie\Test", "USA")
'C:\\Documents and Settings\\robbie\\Test\\USA'
Abbreviate attrs={"class":...} to class_=....
BeautifulSoup 4.1.2 introduces searching with class_, which removes the need for the verbose attrs={"class":...}.
I imagine there are even more things you can change, but that's quite a few to start with.
What do you want the book list for, in the end? You should export each book at the end of the "for url in range" block (inside it), and do without the all_books list. If you really need a list, define exactly which pieces of information you will need, rather than keeping full Book objects.

Python to ruby conversion

Hi guys, I have a Python script that posts some data to Google and gets back a response. The script is below.
net, cid, lac = 24005, 40242, 62211
import urllib
a = '000E00000000000000000000000000001B0000000000000000000000030000'
b = hex(cid)[2:].zfill(8) + hex(lac)[2:].zfill(8)
c = hex(divmod(net,100)[1])[2:].zfill(8) + hex(divmod(net,100)[0])[2:].zfill(8)
string = (a + b + c + 'FFFFFFFF00000000').decode('hex')
try:
    data = urllib.urlopen('http://www.google.com/glm/mmap', string)
    r = data.read().encode('hex')
    print r
except:
    print 'connect error'
I want to get the same response with a Ruby script. I am not able to form the request properly and I always get a "Bad Implementation" (HTTP 501) error. Could you tell me where the mistake is? (The Ruby script is attached below.)
require 'net/http'

def fact(mnc, mcc, cid, lac)
  a = '000E00000000000000000000000000001B0000000000000000000000030000'
  b = cid.to_s(16).rjust(8,'0') + lac.to_s(16).rjust(8,'0')
  c = mnc.to_s(16).rjust(8,'0') + mcc.to_s(16).rjust(8,'0')
  string = [a + b + c + 'FFFFFFFF00000000'].pack('H*')
  url = URI.parse('http://www.google.com/glm/mmap')
  resp = Net::HTTP.post_form(url, string)
  print resp
end

puts fact(5, 240, 40242, 62211)
From the documentation:
Posts HTML form data to the specified URI object. The form data must be provided as a Hash mapping from String to String.
You have to pass the parameters, if I understood correctly, in the form:
{"param1" => "value1", "param2" => "value2"}
I just couldn't tell what the names of the parameters you are passing in your request are.
Here are some usage examples for the method Net::HTTP::post_form, also from the official doc:
Ex 1:
uri = URI('http://www.example.com/search.cgi')
res = Net::HTTP.post_form(uri, 'q' => 'ruby', 'max' => '50')
puts res.body
Ex2:
uri = URI('http://www.example.com/search.cgi')
res = Net::HTTP.post_form(uri, 'q' => ['ruby', 'perl'], 'max' => '50')
puts res.body
Link to the examples
Hope it helps.
Edit: for a method that accepts a String as the body of the POST request, see Net::HTTP::request_post.
