FedEx API tracker using Python

I have been working on a FedEx tracking API program in Python. I have what I think is the right code, and I have already registered with FedEx and been given access to the API keys and so on. I am having difficulty finding which packages/libraries to install and import in order to run my program. Here is what I have:
from fedex.config import FedexConfig
from fedex.services.track_service import FedexTrackRequest

CONFIG_OBJ = FedexConfig(key='<key>',
                         password='<pass>',
                         account_number='<acct_no>',
                         meter_number='<meter_no>')

track = FedexTrackRequest(CONFIG_OBJ)

tracking_num = '781820562774'
track.SelectionDetails.PackageIdentifier.Type = 'TRACKING_NUMBER_OR_DOORTAG'
track.SelectionDetails.PackageIdentifier.Value = tracking_num
track.SelectionDetails.PackageIdentifier.CarrierCode = 'FDXG'  # FDXG for Ground, FDXE for Express

track.send_request()
print(track.response)

To call web APIs you should use the requests library together with the built-in json module. If you don't have the requests library, simply run pip install requests. Here's a simple example:

import requests
import json  # part of the standard library, nothing to install

fedex = requests.get("URL")   # replace "URL" with the API endpoint
fedex_api = fedex.json()      # parse the JSON response into a Python dict
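For what it's worth, the imports in the question come from the python-fedex package (pip install fedex), which wraps FedEx's older SOAP services. If you want the plain requests route instead, a rough sketch against FedEx's newer REST Track API might look like this (the endpoint paths and payload field names here are taken from memory of FedEx's developer docs, so verify them against the current documentation before relying on them):

import requests

# 1. Exchange your API key/secret for an OAuth token
#    (endpoint and field names are assumptions -- check FedEx's docs)
auth = requests.post(
    "https://apis.fedex.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "<api_key>",
        "client_secret": "<secret_key>",
    },
)
token = auth.json()["access_token"]

# 2. Ask the Track API about a tracking number
resp = requests.post(
    "https://apis.fedex.com/track/v1/trackingnumbers",
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    json={
        "includeDetailedScans": True,
        "trackingInfo": [
            {"trackingNumberInfo": {"trackingNumber": "781820562774"}}
        ],
    },
)
print(resp.json())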

Related

adding fake_useragent to people_also_ask module

I want to scrape Google's 'People also ask' questions and answers. I am doing it successfully with the following module.
pip install people_also_ask
The problem is that the library is configured so that no one can send many requests to Google. I want to send 1000 requests per day, and to achieve that I have to add fake_useragent to the module. I tried a lot, but when I try to add a fake user agent to the header it gives an error. I am not a pro, so I must have done something wrong myself. Can anyone help me add fake_useragent to the people_also_ask module? Here is the working code to get questions/answers.
from encodings import utf_8
import people_also_ask as paa
from fake_useragent import UserAgent

ua = UserAgent()

while True:
    input("Please make sure the queries are in \\query.txt file.\npress Enter to continue...")
    try:
        query_file = open("query.txt", "r")
        queries = query_file.readlines()
        query_file.close()
        break
    except:
        print("Error with the query.txt file...")

for query in queries:
    res_file = open("result.csv", "a", encoding="utf_8")
    try:
        query = query.replace("\n", "")
    except:
        pass
    print(f'Searching for "{query}"')
    questions = paa.get_related_questions(query, 14)
    questions.insert(0, query)
    print("\n________________________\n")
    main_q = True
    for i in questions:
        i = i.split('?')[0]
        try:
            answer = str(paa.get_answer(i)['response'])
            if answer[-1].isdigit():
                answer = answer[:-11]
            print(f"Question:{i}?")
        except Exception as e:
            print(e)
        print(f"Answer:{answer}")
        if main_q:
            a = ""
            b = ""
            main_q = False
        else:
            a = "<h2>"
            b = "</h2>"
        res_file.writelines(str(f'{a}{i}?{b},"<p>{answer}</p>",'))
        print("______________________")
    print("______________________")
    res_file.writelines("\n")
    res_file.close()

print("\nSearch Complete.")
input("Press any key to Exit!")
This is against Google's terms of service, and the wishes of the people_also_ask package. This answer is for educational purposes only.
You asked why fake_useragent is prevented from working. It isn't being prevented; the people_also_ask package simply doesn't make any calls to fake_useragent. You can't just import a package and expect another package to start using it: you have to make the packages work together yourself.
To do that, you need some idea of how the two packages work. Have a look at the source code and you will see you can make them work together very easily. Just substitute the constant HEADERS in people_also_ask with one generated by fake_useragent before you request any data.
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
questions = paa.get_related_questions(query, 14)
and
paa.google.HEADERS = {'User-Agent': ua.random} # replace the HEADER with a randomised HEADER from fake_useragent
answer = str(paa.get_answer(i)['response'])
NOTE:
Not all user agents will work; Google doesn't return related questions for some of them. That is not the fault of either fake_useragent or people_also_ask.
To alleviate this somewhat, make sure you call ua.update(). You can also use PR #122 of fake_useragent to select only a subset of the newest user agents, which are more likely to work, though you will still get a few missed queries. There is a reason the people_also_ask package didn't bypass or work around this limitation from Google.
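Putting those snippets together with the refresh call from the note, the core of the loop might end up looking roughly like this (just a sketch based on the pieces above; ua.update() only exists in the older fake_useragent releases this question was written against):

ua = UserAgent()
ua.update()  # refresh the cached user-agent database (older fake_useragent versions)

for query in queries:
    # swap in a random User-Agent before every people_also_ask call
    paa.google.HEADERS = {'User-Agent': ua.random}
    questions = paa.get_related_questions(query, 14)
    for i in questions:
        paa.google.HEADERS = {'User-Agent': ua.random}
        answer = str(paa.get_answer(i)['response'])
        print(f"Question: {i}?\nAnswer: {answer}")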

Can we use folium maps to get some kind of site type or site description?

I am playing around with some code to loop through records with longitude and latitude coordinates, and get some kind of site type, or site classification, or whatever you want to call it. This sample code below won't work, but I think it's kind of close.
import folium
import requests
from xml.etree import ElementTree
from folium import plugins

m = folium.Map(location=[40.7368436, -74.1524242], zoom_start=10)

for lat, lon in zip(df_cellgroups['latitude'], df_cellgroups['longitude']):
    # 'tip' and 'name' are the site descriptions I am trying to obtain
    marker = folium.Marker(location=[lat, lon], tooltip=tip, popup=name)
    marker.add_to(m)

m
Essentially I want to grab names like 'Red Bull Arena', 'Upstate University Hospital', 'San Francisco International Airport', etc., etc., etc. So, is it possible to get descriptions of sites, based on lat and lon coordinates, using folium maps? Maybe it's known as a tooltip or popup, not sure. Thanks.
You can retrieve information about a location by using a reverse geocoding service/provider such as those offered by OpenStreetMap, Google or Esri.
(There is an overview here of all the providers supported by Python's geocoder package.)
Below is an example using the geocoder package with OpenStreetMap (Nominatim) as the provider:
# pip install geocoder requests
import time

import requests
import geocoder

locations = (
    (40.7368436, -74.1524242),
    (44.6371650, -63.5917312),
    (47.2233913, 8.817269106),
)

with requests.Session() as session:
    for location in locations:
        g = geocoder.osm(
            location=location,
            method="reverse",
            lang_code="en",
            session=session,
            headers={
                "User-Agent": "Stackoverflow Question 69578280"
            },
        )
        print(g.osm)  # or print(g.json)

        # slow down the loop in order to comply with Nominatim's Usage Policy:
        # https://operations.osmfoundation.org/policies/nominatim
        time.sleep(1)
Alternatively, there are other Python libraries such as the ArcGIS API for Python or GeoPy. Here is an example using the geopy package, again with OpenStreetMap (Nominatim) as the provider:
# pip install geopy
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

locations = (
    (40.7368436, -74.1524242),
    (44.6371650, -63.5917312),
    (47.2233913, 8.817269106),
)

locator = Nominatim(user_agent="Stackoverflow Question 69578280")
# using RateLimiter to comply with Nominatim's Usage Policy
reverse = RateLimiter(locator.reverse, min_delay_seconds=1)

for location in locations:
    result = reverse(location, language="en")
    print(result.raw)
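If the goal is to show those names directly on the folium map from the question, the reverse-geocoded address can be fed straight into the marker's tooltip/popup. Here is a sketch reusing the rate-limited reverse function above and the question's df_cellgroups DataFrame (assumed to have latitude/longitude columns):

import folium

m = folium.Map(location=[40.7368436, -74.1524242], zoom_start=10)

for lat, lon in zip(df_cellgroups['latitude'], df_cellgroups['longitude']):
    result = reverse((lat, lon), language="en")
    site_name = result.address if result else "Unknown site"
    folium.Marker(location=[lat, lon], tooltip=site_name, popup=site_name).add_to(m)

m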
Note: Make sure that you read the terms of use of the service you are going to use. Nominatim's Usage Policy can be found here: https://operations.osmfoundation.org/policies/nominatim

Github repo results when using github3.py library

I'm looking through some of the user documentation for the github3.py library.
I'm trying to list all of a user's repos.
If I use the code below, with gr = gh.repos.list().all(), I get the expected results.
But if I use gr = gh.repos.list(user='username', type='all'), all I get back is this: <pygithub3.core.result.smart.Result object at 0x00000000033728D0>
Looking at the docs, this should work, but I'm new to Python and this library, so I may be missing something.
#!/usr/bin/env python
from pygithub3 import Github
import requests
auth = dict(login="xxxx", user = "xxxx", token="xxxxx", repo="my-repo")
gh = Github(**auth)
gr = gh.repos.list().all()
print gr
Try it this way. repos.list() returns a lazy Result object (which is what you saw printed), so you still need to call .all() on it to actually fetch the repositories:
from pygithub3 import Github
auth = dict(login="my-github-login", password="my-github-password")
g = Github(**auth)
print g.repos.list(user='user-whose-repos-I-want-to-get').all()
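Applied to the call from the question, that means chaining .all() onto the same list() call ('username' is the placeholder from the question):

gr = gh.repos.list(user='username', type='all').all()  # .all() resolves the lazy Result into a list of repos
print gr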

Python 3.3 - bufferapp API. Can't update

I want to publish an update on bufferapp.com with Python.
I installed the Python buffer module: https://github.com/bufferapp/buffer-python
I managed to connect to their API with Python 3.
But if I use the update.publish() command it gives me the following error:
NameError: name 'update' is not defined
I'm using this code:
from pprint import pprint as pp
from colorama import Fore
from buffpy.models.update import Update
from buffpy.managers.profiles import Profiles
from buffpy.managers.updates import Updates
from buffpy.api import API

token = 'awesome_token'

api = API(client_id='client_id',
          client_secret='client_secret',
          access_token=token)

# publish now
update.publish()
What am I doing wrong?
Thank you for your help.
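The NameError itself points at the problem: update is never assigned before update.publish() is called. A minimal sketch of the missing step, based on the Update model shown in buffer-python's README (the update id below is a placeholder):

from buffpy.api import API
from buffpy.models.update import Update

api = API(client_id='client_id',
          client_secret='client_secret',
          access_token='awesome_token')

# create the object the code tries to publish: load an existing update by its id
update = Update(api=api, id='<update_id>')
update.publish()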

Python script for "Google search by image"

I have checked the Google Search APIs and it seems that they have not released any API for searching by image. So I was wondering if there exists a Python script/library through which I can automate the "search by image" feature.
This was annoying enough to figure out that I thought I'd throw a comment on the first Python-related Stack Overflow result for "script google image search". The most annoying part of all this is setting up your application and custom search engine (CSE) in Google's web UI, but once you have your API key and CSE ID, define them in your environment and do something like:
#!/usr/bin/env python
# save top 10 google image search results to current directory
# https://developers.google.com/custom-search/json-api/v1/using_rest
import requests
import os
import sys
import re
import shutil

url = 'https://www.googleapis.com/customsearch/v1?key={}&cx={}&searchType=image&q={}'
apiKey = os.environ['GOOGLE_IMAGE_APIKEY']
cx = os.environ['GOOGLE_CSE_ID']
q = sys.argv[1]

i = 1
for result in requests.get(url.format(apiKey, cx, q)).json()['items']:
    link = result['link']
    image = requests.get(link, stream=True)
    if image.status_code == 200:
        m = re.search(r'[^\.]+$', link)
        filename = './{}-{}.{}'.format(q, i, m.group())
        with open(filename, 'wb') as f:
            image.raw.decode_content = True
            shutil.copyfileobj(image.raw, f)
    i += 1
There is no API available, but you can parse the page and imitate the browser. I don't know how much data you need to parse, because Google may limit or block access.
You can imitate the browser by simply using urllib and setting the correct headers. If you think parsing complex web pages from Python would be difficult, you can use a headless browser such as PhantomJS instead; inside a browser it is trivial to get the correct elements using JavaScript/DOM.
Note: before trying any of this, check Google's TOS.
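A bare-bones sketch of the "imitate the browser" idea with urllib (the URL query and User-Agent string are placeholders, and Google may still rate-limit or block requests like this):

from urllib.request import Request, urlopen

url = "https://www.google.com/search?tbm=isch&q=red+pandas"   # placeholder query
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})      # present a normal browser user agent
html = urlopen(req).read().decode("utf-8", errors="replace")
# ...then parse `html` with your HTML parser of choice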
You can try this:
https://developers.google.com/image-search/v1/jsondevguide#json_snippets_python
It's deprecated, but seems to work.
