I am using this API:
https://www.blockchain.com/api/q
I am trying to make a GET request:
url = 'https://www.blockchain.info/api/q/getreceivedbyaddress/' + strpod + '?confirmations=6'
zapros = requests.get(url)
But it returns the entire page, and I only need the balance value. Please help me.
import requests

address = "17LREmmnmTxCoFZ59wfhg4S639GsPqjTRT"
URL = "https://blockchain.info/q/getreceivedbyaddress/" + address + "?confirmations=6"
r = requests.get(url=URL)

# extract the balance (in satoshi)
bt_balance = r.json()
The API link is not wrong; note that it uses blockchain.info/q/..., not the www.blockchain.info/api/q/... path from your question. Please check it with your blockchain address.
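Side note: the /q endpoints return the value as plain text, so you can also read r.text and convert it yourself. A minimal sketch (1 BTC = 100,000,000 satoshi):

import requests

address = "17LREmmnmTxCoFZ59wfhg4S639GsPqjTRT"
url = "https://blockchain.info/q/getreceivedbyaddress/" + address + "?confirmations=6"

r = requests.get(url)
balance_satoshi = int(r.text)        # the endpoint returns a bare integer
balance_btc = balance_satoshi / 1e8  # 1 BTC = 100,000,000 satoshi
print(balance_satoshi, balance_btc)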
I am trying to access my account on MyFitnessPal in order to download my own food diaries. However, whenever I run the following code, I am consistently redirected to the login page. What am I missing? In the HTML code for the login page, I only see two input tags, one for "email" and one for "password", both of which I'm making sure to supply. I'm pretty new to web scraping, so any advice would be appreciated!
import datetime

import requests
from bs4 import BeautifulSoup
# Save relevant urls
base_url = 'https://www.myfitnesspal.com'
login_action = '/account/login'
login_url = base_url + login_action
date = datetime.datetime(2022,3,13)
fmt_date = date.strftime('%Y-%m-%d')
food_url = base_url + '/reports/printable_diary/?from=' + fmt_date + '&to=' + fmt_date
headers = {'user-agent': {user agent}}
credentials = {'email': {email}, 'password': {password}}
s = requests.session()
login = s.post(login_url, headers = headers, data = credentials)
r = s.get(food_url, headers = headers)
soup = BeautifulSoup(r.text, 'html.parser')
print(soup.prettify())
What ends up getting printed is the HTML from the login page. (I have confirmed this by also printing the login page's HTML.)
Try this: r = requests.get('https://www.myfitnesspal.com/account/login', auth=('email', 'password'))
I got a 200 response with this.
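Note that auth= sends HTTP Basic credentials, which is not the same as submitting the site's login form. If the form login is what you need, a common pattern is to submit every input field the form contains, including hidden ones (many sites embed a CSRF token there). A hedged sketch; the field names and form structure are assumptions, not confirmed for MyFitnessPal:

import requests
from bs4 import BeautifulSoup

base_url = 'https://www.myfitnesspal.com'
login_url = base_url + '/account/login'

s = requests.Session()

# Fetch the login page first so the session picks up any cookies.
login_page = s.get(login_url)
soup = BeautifulSoup(login_page.text, 'html.parser')

# Collect every input in the form, including hidden fields such as CSRF tokens.
form = soup.find('form')
payload = {inp.get('name'): inp.get('value', '')
           for inp in form.find_all('input') if inp.get('name')}

# Overwrite the visible fields with your credentials (field names are assumptions).
payload['email'] = 'you@example.com'
payload['password'] = 'your-password'

r = s.post(login_url, data=payload)
print(r.status_code)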
I am trying to extract the followers of an arbitrary Instagram page. I tried to use Python in combination with Beautiful Soup, but so far I have not been able to get any follower information out of the page.
import json
import time

import requests

def get_user_info(user_name):
    url = "https://www.instagram.com/" + user_name + "/?__a=1"
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError:
        print('Seems like DNS lookup failed..')
        time.sleep(60)
        return None
    if r.status_code != 200:
        print('User: ' + user_name + ' status code: ' + str(r.status_code))
        print(r)
        return None
    info = json.loads(r.text)
    return info['user']

get_user_info("wernergruener")
As mentioned, I do not get the followers of the page. How could I do this?
Cheers,
Andi
With API/JSON:
I'm not familiar with the Instagram API, but it doesn't look like it returns detailed information about a person's followers, just the number of followers.
You should be able to get that information using info["user"]["followed_by"]["count"].
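For example, building on the get_user_info function above, pulling the follower count out of the returned dictionary would look like this (a sketch assuming the JSON shape quoted above):

user = get_user_info("wernergruener")
if user is not None:
    # "followed_by" -> "count" is the path quoted above
    print(user["followed_by"]["count"])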
With raw page/Beautiful Soup:
Assuming the non-API page reveals the information you want about a person's followers, you'll want to download the raw HTML (instead of JSON) and parse it using Beautiful Soup.
import time

import requests
from bs4 import BeautifulSoup

def get_user_info(user_name):
    url = "https://www.instagram.com/" + user_name
    try:
        r = requests.get(url)
    except requests.exceptions.ConnectionError:
        print('Seems like DNS lookup failed..')
        time.sleep(60)
        return None
    if r.status_code != 200:
        print('User: ' + user_name + ' status code: ' + str(r.status_code))
        print(r)
        return None
    soup = BeautifulSoup(r.text, 'html.parser')
    # find things using Beautiful Soup

get_user_info("wernergruener")
Beautiful Soup has some of the most intuitive documentation I've ever read. I'd start there:
https://www.crummy.com/software/BeautifulSoup/bs4/doc/
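To make the "find things" step concrete, here is a purely hypothetical example; it assumes the follower count appears in the page's Open Graph description meta tag, which you would need to verify against Instagram's actual markup:

# Inside get_user_info, in place of the "find things" placeholder.
# Hypothetical: assumes a tag like
# <meta property="og:description" content="123 Followers, ...">.
meta = soup.find('meta', attrs={'property': 'og:description'})
if meta is not None:
    print(meta['content'])  # e.g. "123 Followers, 45 Following, ..."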
With API/python-instagram:
Other people have already done a lot of the heavy lifting for you. I think python-instagram should offer you easier access to the information you want.
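I haven't verified this against the current library, but based on python-instagram's documented usage the client looked roughly like this; treat the class and method names as assumptions and check the project's README, since both the library and the underlying Instagram API have changed over time:

# Sketch based on python-instagram's documented usage (unverified).
from instagram.client import InstagramAPI

api = InstagramAPI(access_token='YOUR_ACCESS_TOKEN',      # assumption: OAuth token
                   client_secret='YOUR_CLIENT_SECRET')
followers, next_ = api.user_followed_by(user_id='self')   # assumption: method per README
for user in followers:
    print(user.username)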
I'm relatively new to the Facebook Graph API.
I'm the owner of a business page, and I want to get some information about the fans (their interests, i.e. the pages they like).
How can I achieve that?
This is what I have written so far:
api_endpoint = "https://graph.facebook.com/v2.10"
page_id = "1627395454223846"
node = '/' + page_id + '/insights/page_impressions'
url = api_endpoint + node
Now I create the graph object:
graph = facebook.GraphAPI(access_token=access["token"], version="2.10")
I had in mind to use requests, but how? Do I have to use it with the graph object?
Thanks
To be honest, I think using requests for this is the preferred approach.
import requests

payload = {'access_token': access['token']}
api_endpoint = 'https://graph.facebook.com/v2.10'
page_id = '1627395454223846'

url = '{}/{}/insights/page_impressions'.format(api_endpoint, page_id)
resp = requests.get(url, params=payload)
# ...process the data as you wish...
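To sketch that processing step: Graph API insights responses wrap the metrics in a top-level "data" list, so iterating over resp.json() would look roughly like this (the exact fields depend on the metric and API version):

resp_json = resp.json()

# Insights responses carry a list of metric objects under "data";
# each typically has a "name", a "period" and a list of "values".
for metric in resp_json.get('data', []):
    print(metric['name'], metric['period'])
    for value in metric.get('values', []):
        print('  ', value)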
This is the URL I build in the code logic:
redirect_url = "%s?first_variable=%s&second_variable=%s"%(response_url,first_value,second_value)
The response URL is built using the following code:
response_url = request.build_absolute_uri(reverse('workshop:ccavenue_payment_response'))
The output of response_url is:
http://localhost:8000/workshop/ccavenue/payment-response/
This is the output URL (redirect_url):
http://localhost:8000/workshop/ccavenue/payment-response/%07%07%07%07%07%07%07?first_variable=xxxxxxxxxxxxxxxxxxxxxxxxxx&second_variable=encrypted_data
How can I remove %07 from my URL?
Thank you in advance.
Is this what you are looking for?
payload = {'first_variable': first_value, 'second_variable': second_value}
r = requests.get(response_url, params=payload)
Try it and print(r.url) to see the final URL.
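If what you actually need is a redirect URL rather than a requests call: %07 is the URL-encoded BEL control character, and it sits before the "?" in your output, so it is already inside response_url before the query string is appended (padding left over from a decryption step is a common culprit, though that is an assumption). One approach is to strip non-printable characters and let urlencode build the query string; this sketch assumes Python 3 (on Python 2, urlencode lives in urllib):

from urllib.parse import urlencode

# response_url as built by request.build_absolute_uri(...), possibly
# polluted with control characters such as '\x07' (rendered as %07).
clean_url = ''.join(ch for ch in response_url if ch.isprintable())

# urlencode also escapes the parameter values properly.
query = urlencode({'first_variable': first_value, 'second_variable': second_value})
redirect_url = '%s?%s' % (clean_url, query)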
I am using an API to fetch orders from a website. The problem is that a single call only fetches 20 orders. I figured out I need to use a pagination iterator, but I don't know how to use it. How can I fetch all the orders at once?
My code:
def search_orders(self):
    headers = {'Authorization': 'Bearer %s' % self.token, 'Content-Type': 'application/json'}
    url = "https://api.flipkart.net/sellers/orders/search"
    filter = {"filter": {"states": ["APPROVED", "PACKED"]}}
    return requests.post(url, data=json.dumps(filter), headers=headers)
Here is a link to documentation.
Documentation
You need to do what the documentation suggests -
The first call to the Search API returns a finite number of results based on the pageSize value. Calling the URL returned in the nextPageURL field of the response gets the subsequent pages of the search result.
nextPageUrl - String - A GET call on this URL fetches the next page results. Not present for the last page
(Emphasis mine)
You can use response.json() to get the JSON body of the response. Then check the hasMore flag to see if there are more results; if so, use requests.get() to fetch the next page, and keep doing this until hasMore is false. Example -
def search_orders(self):
    headers = {'Authorization': 'Bearer %s' % self.token, 'Content-Type': 'application/json'}
    url = "https://api.flipkart.net/sellers/orders/search"
    filter = {"filter": {"states": ["APPROVED", "PACKED"]}}
    s = requests.Session()
    s.headers.update(headers)  # send the auth headers on the paging GETs as well
    response = s.post(url, data=json.dumps(filter))
    orderList = []
    resp_json = response.json()
    orderList.extend(resp_json["orderItems"])  # extend, so the result stays a flat list
    while resp_json.get('hasMore'):
        # nextPageUrl is relative, so prefix the API host
        response = s.get('https://api.flipkart.net/sellers{0}'.format(resp_json['nextPageUrl']))
        resp_json = response.json()
        orderList.extend(resp_json["orderItems"])
    return orderList
The above code should return the complete list of orders.