Where can I get payer_id for executing a payment (PayPal)? - python

I'm working with the PayPal Python REST SDK. After a payment is created, we need to execute it. There is a code sample at https://github.com/paypal/PayPal-Python-SDK/blob/master/samples/payment/execute.py. But when executing the payment
if payment.execute({"payer_id": "DUFRQ8GWYMJXC"}):  # return True or False
    print("Payment[%s] execute successfully" % (payment.id))
else:
    print(payment.error)
we need to supply a payer_id. But how can I get it? Any ideas or code examples?

The payer_id is returned with the callback to your return URL. Alternatively, you can do the following...
In the process of creating a sale, do...
saleinfo_ = {"intent":"sale",
"redirect_urls":{
"return_url":("myurlonsuccess"),
"cancel_url":("myurlonfail")
},
"payer":{"payment_method":"paypal"},
"transactions":[
{"amount":{"total":out_totaltopay_, "details":{"tax":out_taxes_, "subtotal":out_subtotal_}, "currency":symbol_}, "description":description_}
payment_ = Payment(saleinfo_)
if (payment_.create()):
token_ = payment_.id
Then when the return callback arrives, use...
payment_ = Payment.find(token_)
if payment_ is not None:
    payerid_ = payment_.payer.payer_info.payer_id
    if payment_.execute({"payer_id": payerid_}):
        ...
The JSON data received in the find process is similar to the following:
{'payment_method': 'paypal', 'status': 'VERIFIED', 'payer_info': {'shipping_address': {'line1': '1 Main St', 'recipient_name': 'Test Buyer', 'country_code': 'US', 'state': 'CA', 'postal_code': '95131', 'city': 'San Jose'}, 'first_name': 'Test', 'payer_id': '<<SOMEID>>', 'country_code': 'US', 'email': 'testbuyer@mydomain.com', 'last_name': 'Buyer'}}
Hope that helps
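For completeness, here is a small framework-agnostic sketch of handling the return callback. The example URL and its query values are made up, and the SDK is assumed to already be configured with your client credentials; in the REST flow PayPal appends paymentId, token and PayerID to your return_url when it redirects the buyer back.
from urllib.parse import urlparse, parse_qs
from paypalrestsdk import Payment

# Hypothetical redirect back from PayPal after the buyer approves the payment
return_url = "https://example.com/success?paymentId=PAY-123&token=EC-456&PayerID=DUFRQ8GWYMJXC"
params = parse_qs(urlparse(return_url).query)

# Look the payment up again and execute it with the payer's id
payment = Payment.find(params["paymentId"][0])
if payment.execute({"payer_id": params["PayerID"][0]}):
    print("Payment[%s] execute successfully" % payment.id)
else:
    print(payment.error)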

Related

How do I read a yaml file into a Jupyter notebook?

I have a file from an Open API Spec that I have been trying to access in a Jupyter notebook. It is a .yaml file. I was able to upload it into Jupyter and put it in the same folder as the notebook I'd like to use to access it. I am new to Jupyter and Python, so I'm sorry if this is a basic question. I found a forum that suggested this code to read the data (in my file: "openapi.yaml"):
import yaml

with open("openapi.yaml", 'r') as stream:
    try:
        print(yaml.safe_load(stream))
    except yaml.YAMLError as exc:
        print(exc)
This seems to bring the data in, but it is a completely unstructured stream like so:
{'openapi': '3.0.0', 'info': {'title': 'XY Tracking API', 'version': '2.0', 'contact': {'name': 'Narrativa', 'url': 'http://link, 'email': '}, 'description': 'The XY Tracking Project collects information from different data sources to provide comprehensive data for the XYs, X-Y. Contact Support:'}, 'servers': [{'url': 'link'}], 'paths': {'/api': {'get': {'summary': 'Data by date range', 'tags': [], 'responses': {'200': {'description': 'OK', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/covidtata'}}}}}, 'operationId': 'get-api', 'parameters': [{'schema': {'type': 'string', 'format': 'date'}, 'in': 'query', 'name': 'date_from', 'description': 'Date range beginig (YYYY-DD-MM)', 'required': True}, {'schema': {'type': 'string', 'format': 'date'}, 'in': 'query', 'name': 'date_to', 'description': 'Date range ending (YYYY-DD-MM)'}], 'description': 'Returns the data for a specific date range.'}}, '/api/{date}': {'parameters': [{'schema': {'type': 'string', 'format': 'date'}, 'name': 'date', 'in': 'path', 'required': True}], 'get': {'summary': 'Data by date', 'tags': [], 'responses': {'200': {'description': 'OK', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/data'}}}}}, 'operationId': 'get-api-date', 'description': 'Returns the data for a specific day.'}}, '/api/country/{country}': {'parameters': [{'schema': {'type': 'string', 'example': 'spain'}, 'name': 'country', 'in': 'path', 'required': True, 'example': 'spain'}, {'schema': {'type': 'strin
...etc.
I'd like to work through the data for analysis but can't seem to access it correctly. Any help would be extremely appreciated!!! Thank you so much for reading.
What you're seeing in the output is the parsed data: yaml.safe_load returns a regular Python dictionary, which prints in a compact, JSON-like form without human-readable newlines or indentation. You should be able to work with this data just fine in your code.
Alternatively, you may want to consider another parser/emitter such as ruamel.yaml which can make dealing with YAML files considerably easier than the package you're currently importing. Print statements with this package can preserve lines and indentation for better readability.
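For example, once yaml.safe_load has run, the result is just nested dicts and lists, so you can pull out specific fields with ordinary indexing (the key names below are taken from the output shown in the question):
import yaml

with open("openapi.yaml", "r") as stream:
    spec = yaml.safe_load(stream)  # nested dicts and lists

# Access individual fields directly
print(spec["info"]["title"])    # 'XY Tracking API'
print(spec["info"]["version"])  # '2.0'

# Iterate over the declared API paths and the keys under each one
for path, entry in spec["paths"].items():
    print(path, list(entry.keys()))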

Pandas: read the chat log JSON into a data frame?

How do I convert the nested structure below into a data frame? The dictionary below contains details about cloud containers, and I want to extract information like name, language, description and workspace_id.
{'workspaces': [{'name': 'A_SupportAgent_dev',
'language': 'en',
'metadata': {'api_version': {'major_version': 'v1',
'minor_version': '2019-02-28'},
'digressions': True},
'description': 'Credit Card Banking Support Agent to assist with Sales And Service, created by Oliver Ivanoski and Steve Green',
'workspace_id': '',
'learning_opt_out': False},
{'name': 'Neatnik Watson Assistant Webhook Demo Skill',
'language': 'en',
'metadata': {'api_version': {'major_version': 'v1',
'minor_version': '2019-02-28'}},
'webhooks': [{'url': 'https://neatnik.net/watson/assistant/webhook/',
'name': 'main_webhook',
'headers': []}],
'description': '',
'workspace_id': '',
'system_settings': {'tooling': {'store_generic_responses': True},
'system_entities': {'enabled': True},
'spelling_auto_correct': True},
'learning_opt_out': False}],
'pagination': {'refresh_url': '/v1/workspaces?version=2019-02-28'}}
I want to convert the dictionary above into a data frame (one row per workspace).
I tried:
pd.DataFrame(list(Workspace_List.items()), columns=['workspaces', 'pagination'])
columns = list(Workspace_List.keys())
values = list(Workspace_List.values())
arr_len = len(values)
You need to specify the columns, since each workspace is itself a dictionary. The following code should help you organize your desired output:
key = ['name', 'language', 'description', 'workspace_id']
output = pd.DataFrame(columns=key)
for i in range(len(df['workspaces'])):  # df is the dictionary from the question (Workspace_List)
    ll = df['workspaces'][i]
    output.loc[i] = [ll[x] for x in key]
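As an alternative sketch (assuming Workspace_List is the dictionary shown in the question), pandas.json_normalize can flatten the list of workspace dicts in one call, and you then keep only the columns you care about:
import pandas as pd

# Workspace_List is assumed to be the dictionary from the question
df = pd.json_normalize(Workspace_List["workspaces"])

# Keep only the columns of interest
output = df[["name", "language", "description", "workspace_id"]]
print(output)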

Why is get_profile_connections() returning an empty array when using linkedin-api?

I am working on a school project where I'm supposed to get my LinkedIn connections and do some text mining on them. I found an API project provided by tomquirk (GitHub) and it was running properly, but when I moved my project to another computer it stopped working, and now when I try to use get_profile_connections() it always returns an empty array. I don't know what the problem is and I hope I can get some help here.
This is my code:
from linkedin_api import Linkedin

api = Linkedin('linkedintestapi2@gmail.com', '*******')

# GET a profile
profile = api.get_profile('mohamad-arune-a26930176')

# GET a profile's contact info
contact_info = api.get_profile_contact_info('mohamad-arune-a26930176')

# GET all connected profiles (1st, 2nd and 3rd degree) of a given profile
connections = api.get_profile_connections('ACoAACnIxL4B6Ff_-AnBiMDQaXXn2jAvma9NkXI')

print(profile)
print(contact_info)
print(connections)
and I get this as a result:
{'lastName': 'arune', 'locationName': 'Evansville, Indiana Area', 'student': False, 'elt': False, 'firstName': 'mohamad', 'entityUrn': 'urn:li:fs_profile:ACoAACnIxL4B6Ff_-AnBiMDQaXXn2jAvma9NkXI', 'location': {'basicLocation': {'countryCode': 'us', 'postalCode': '47521'}, 'preferredGeoPlace': 'urn:li:fs_region:(us,244)'}, 'headline': 'Student at azazzaz', 'profile_id': 'ACoAACnIxL4B6Ff_-AnBiMDQaXXn2jAvma9NkXI', 'experience': [], 'skills': [], 'education': [{'entityUrn': 'urn:li:fs_education:(ACoAACnIxL4B6Ff_-AnBiMDQaXXn2jAvma9NkXI,568801875)', 'timePeriod': {'endDate': {'year': 2009}, 'startDate': {'year': 2002}}, 'degreeName': 'azazazaz', 'schoolName': 'azazzaz', 'fieldOfStudy': 'azazzaz'}]}
{'email_address': 'linkedintestapi2@gmail.com', 'websites': [], 'phone_numbers': []}
[]

Mapbox Geocoder gives index error (internal server error)

I'm using the Mapbox geocoder to find latitude and longitude from zip codes. The problem is that it sometimes works just fine and sometimes doesn't work at all. When it doesn't work, it returns an index-out-of-bounds error, and my terminal also shows an internal server error. What can be done in this situation?
My code is below:
def get_context_data(self, **kwargs):
    context = super(SingleCenterDetailView, self).get_context_data(**kwargs)
    zip_code = self.object.center.zip_code
    geocoder = Geocoder(access_token=mapbox_access_token)
    response = geocoder.forward(str(zip_code))
    response = response.geojson()['features'][0]
    resp = response.get('center')
    geo = [resp[0], resp[1]]
    context['geo'] = geo
    return context
The problem is in the line response = response.geojson()['features'][0], because when it works, printing this value shows a dict like this:
{'text': '53-334', 'context': [{'text': 'Wrocław', 'wikidata': 'Q1799', 'id': 'place.8365052709251970'}, {'short_code': 'PL-DS', 'text': 'Dolnośląskie', 'wikidata': 'Q54150', 'id': 'region.25860'}, {'short_code': 'pl', 'text': 'Poland', 'wikidata': 'Q36', 'id': 'country.340'}], 'geometry': {'coordinates': [17.026519, 51.096412], 'type': 'Point'}, 'center': [17.026519, 51.096412], 'type': 'Feature', 'relevance': 1, 'bbox': [17.024936, 51.095263, 17.028411, 51.09752], 'place_name': '53-334, Wrocław, Dolnośląskie, Poland', 'place_type': ['postcode'], 'properties': {}, 'id': 'postcode.5334805015388220'}
When it doesn't work, it prints nothing (that's the index error).
So my question is: can anything be done to solve this? If it sometimes works and sometimes doesn't, that probably means that for some zip codes it cannot find the latitude and longitude, but maybe there is some workaround? Or maybe the problem is much more trivial and I simply did something wrong? (I can even imagine it is some Django problem, because of the internal server error.)
OK, so I have found a solution and it is much simpler than I thought. Sometimes a zip code isn't enough, so instead of the zip code you have to pass the entire address to geocoder.forward(), like 'Szaserów Warszawa 04-141';
then it should work pretty well.
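If some queries may still come back empty, it also helps to guard against an empty features list instead of indexing [0] unconditionally; here is a sketch based on the view from the question:
def get_context_data(self, **kwargs):
    context = super(SingleCenterDetailView, self).get_context_data(**kwargs)
    zip_code = self.object.center.zip_code
    geocoder = Geocoder(access_token=mapbox_access_token)
    response = geocoder.forward(str(zip_code))
    features = response.geojson().get('features', [])
    if features:
        lon, lat = features[0]['center']  # Mapbox returns [longitude, latitude]
        context['geo'] = [lon, lat]
    else:
        context['geo'] = None  # no match for this query; pick a sensible fallback
    return context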

cherrypy.request.body.read() returning empty bytes object

I'm trying to implement a PayPal IPN listener using CherryPy. To verify an IPN, I have to send the exact POST data I received (order and all) back to PayPal with one parameter added. Since Python dicts are unordered, I can't just read the post parameters in the normal way, so I'm trying to read the raw post data with cherrypy.request.body.read(). It's returning an empty bytes object, though, and I can't figure out why.
Here's the listener code right now:
class Server(object):
    @cherrypy.expose
    def ipn_listener(self, **kwargs):
        print(kwargs)
        print("CONTENT LENGTH")
        cl = cherrypy.request.headers['Content-Length']
        print(cl)
        print("BODY")
        body = cherrypy.request.body.read(int(cl))
        print(body.decode('utf-8'))
        print("BODY LENGTH")
        print(len(body))
The output from those print statements is
{'residence_country': 'US', 'mc_handling': '2.06', 'mc_gross_1': '12.34', 'address_city': 'San Jose', 'verify_sign': 'AFcWxV21C7fd0v3bYYYRCpSSRl31AQCuZQmE68wbtffSmpqH2dQ4nr9n', 'txn_id': '993769469', 'mc_handling1': '1.67', 'tax': '2.02', 'first_name': 'John', 'address_street': '123 any street', 'address_country': 'United States', 'notify_version': '2.1', 'payer_status': 'verified', 'mc_fee': '0.44', 'mc_gross': '12.34', 'address_name': 'John Smith', 'payment_type': 'instant', 'item_number1': 'AK-1234', 'invoice': 'abc1234', 'payment_status': 'Completed', 'address_state': 'CA', 'business': 'seller@paypalsandbox.com', 'mc_shipping': '3.02', 'address_zip': '95131', 'payment_date': 'Sun Mar 26 2017 14:27:51 GMT-0400 (EDT)', 'item_name1': 'something', 'address_status': 'confirmed', 'address_country_code': 'US', 'mc_currency': 'USD', 'test_ipn': '1', 'custom': 'xyz123', 'txn_type': 'cart', 'payer_id': 'TESTBUYERID01', 'payer_email': 'buyer@paypalsandbox.com', 'last_name': 'Smith', 'receiver_email': 'seller@paypalsandbox.com', 'mc_shipping1': '1.02', 'receiver_id': 'seller@paypalsandbox.com'}
CONTENT LENGTH
904
BODY
BODY LENGTH
0
The keyword arguments aren't empty, so I know the POST data is going through. I can read the expected length of the body from the headers, but when I actually go to read the body, I get nothing.
Any ideas would be appreciated. Thanks!
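One possible workaround, offered as an assumption rather than a confirmed fix: since CherryPy has already parsed the form-encoded body into kwargs, the underlying stream has been consumed by the time body.read() runs. A commonly suggested approach is to disable body processing for this handler and read the raw bytes yourself; the config key and attributes below should be double-checked against your CherryPy version.
import cherrypy

class Server(object):
    @cherrypy.expose
    # Assumption: turning off body processing so CherryPy leaves the raw entity unread
    @cherrypy.config(**{'request.process_request_body': False})
    def ipn_listener(self):
        cl = int(cherrypy.request.headers['Content-Length'])
        # Read the raw, still-ordered form body directly from the request stream
        raw_body = cherrypy.request.rfile.read(cl)
        print(raw_body.decode('utf-8'))
        return "OK"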
