Refresh page when new price is available in Bottle (Python)

I am making a Bitcoin/Ethereum price ticker webpage in Bottle for Python for my company, and I want to refresh the page when new prices are available. I am pulling the price data from an API endpoint available through the company; I have hidden its URL for security purposes.
templates.py
from bottle import run, route, template
import requests

main_api = '...'  # URL to the company's API (hidden by the poster)

def isDataValid(json_data):
    return "status" in json_data

def returnPrices(coin, curr):
    url = main_api + coin + curr
    json_data = requests.get(url).json()
    if isDataValid(json_data):
        buy_price = str(json_data["data"]["buy_price"])
        sell_price = str(json_data["data"]["sell_price"])
        prices = [buy_price, sell_price]
    else:
        prices = ["Error"]
    return prices

@route('/')
def index():
    pricesBTC = returnPrices('BTC', 'USD')
    pricesETH = returnPrices('ETH', 'USD')
    btc_buy_price = pricesBTC[0]
    btc_sell_price = pricesBTC[1]
    eth_buy_price = pricesETH[0]
    eth_sell_price = pricesETH[1]
    return template('index', btc_buy_price=btc_buy_price, btc_sell_price=btc_sell_price,
                    eth_buy_price=eth_buy_price, eth_sell_price=eth_sell_price)

run(reloader=True, debug=True)
So how do I refresh the page every time the prices change? The ETH and BTC prices probably don't change at the same time, so I might have to refresh whenever either of them changes. Thank you.

It is not possible without some browser-side JavaScript.
For example, you can create an additional Bottle endpoint that serves the updated data as JSON, and have an in-browser script poll it via AJAX and update the respective HTML page elements.
As far as the JS is concerned, there are many ways to implement this, from plain jQuery to frameworks like Angular and Vue.js.
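For instance, a minimal sketch of the endpoint side, reusing the returnPrices() helper from the question (the /prices path and the field names are illustrative, not from the original post):

from bottle import route

@route('/prices')
def prices():
    # Bottle automatically serializes a returned dict to JSON.
    btc = returnPrices('BTC', 'USD')
    eth = returnPrices('ETH', 'USD')
    return {'btc_buy': btc[0], 'btc_sell': btc[1],
            'eth_buy': eth[0], 'eth_sell': eth[1]}

The script in index.tpl would then poll it, e.g. setInterval(() => fetch('/prices').then(r => r.json()).then(updatePage), 3000), where updatePage is a small function you write that copies the values into the DOM.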

Related

Search query keeps returning 0 for Google Maps API query

I'm trying to find the number of museums in each city in the UK using the Google Maps API. I keep getting 0 search results with the following code. I thought it might be because I hadn't enabled billing on my Google Maps project, but I enabled billing and it still didn't work. Then I created a new API key, and that didn't work either. Here is my code:
import requests
import json
api_key = ''
query = 'museums'
location = '51.509865,0.1276' # lat,lng of London
radius = 10000 # search radius in meters
url = f'https://maps.googleapis.com/maps/api/place/textsearch/json?query={query}&location={location}&radius={radius}&key={api_key}'
#url = f'https://maps.googleapis.com/maps/api/place/textsearch/json?query={query}&key={api_key}'
response = requests.get(url)
data = json.loads(response.text)
# retrieve the number of results
num_results = len(data['results'])
print(f'Number of results for "{query}" in "{location}": {num_results}')
I'm also open to trying a different method or package if that works.
And what it returns:
Number of results for "museum" in "51.509865,0.1276": 0
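One quick check, as a hedged suggestion rather than a confirmed fix: a Places text search response also carries a status field and, when the request is rejected, an error_message, and printing them usually explains an unexpected zero-result response:

# Inspect the API's own diagnostics before counting results.
print(data['status'])                 # e.g. OK, ZERO_RESULTS, REQUEST_DENIED
print(data.get('error_message', ''))  # populated when the request was rejected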

KeyError prices for Coingecko API

I have tested this and implemented it in my Highcharts JavaScript file. The caveat is that when I reload the page twice, it crashes due to an error.
import requests
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def cryptodashboard():
    # Get historical price data for Bitcoin, Ethereum, and Ripple
    btc_data = requests.get(
        'https://api.coingecko.com/api/v3/coins/bitcoin/market_chart?vs_currency=usd&days=365').json()['prices']
    eth_data = requests.get(
        'https://api.coingecko.com/api/v3/coins/ethereum/market_chart?vs_currency=usd&days=365').json()['prices']
    xrp_data = requests.get(
        'https://api.coingecko.com/api/v3/coins/ripple/market_chart?vs_currency=usd&days=365').json()['prices']
    # Get live data for Bitcoin, Ethereum, and Ripple
    btc_live = requests.get(
        'https://api.coingecko.com/api/v3/coins/bitcoin').json()
    eth_live = requests.get(
        'https://api.coingecko.com/api/v3/coins/ethereum').json()
    xrp_live = requests.get(
        'https://api.coingecko.com/api/v3/coins/ripple').json()
    # Get market cap data for Bitcoin, Ethereum, and Ripple
    btc_market_cap = btc_live['market_data']['market_cap']['usd']
    eth_market_cap = eth_live['market_data']['market_cap']['usd']
    xrp_market_cap = xrp_live['market_data']['market_cap']['usd']
    return render_template('index.html', btc_data=btc_data, eth_data=eth_data, xrp_data=xrp_data,
                           btc_live=btc_live, eth_live=eth_live, xrp_live=xrp_live,
                           btc_market_cap=btc_market_cap, eth_market_cap=eth_market_cap,
                           xrp_market_cap=xrp_market_cap)
This is the error in the Flask debugger: KeyError: 'prices'. When I visit https://api.coingecko.com/api/v3/coins/bitcoin/market_chart?vs_currency=usd&days=365 it tells me that I have reached the API limit, hence it is not able to show the price array. I tried changing days=365 in the API call to days=2, but the problem still persists. Please advise me how to fix this problem.
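One hedged way to handle this (not from the original post) is to check the response before indexing into 'prices' and back off when the rate limit is hit; the helper name, retry count, and delay below are illustrative:

import time
import requests

def get_chart(coin, days=365, retries=3):
    # CoinGecko's free tier returns HTTP 429 once its rate limit is reached.
    url = f'https://api.coingecko.com/api/v3/coins/{coin}/market_chart?vs_currency=usd&days={days}'
    payload = {}
    for _ in range(retries):
        resp = requests.get(url)
        payload = resp.json()
        if resp.status_code == 200 and 'prices' in payload:
            return payload['prices']
        time.sleep(10)  # back off before retrying
    raise RuntimeError(f'Could not fetch prices for {coin}: {payload}')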

Django back-end scripts scheduling

I'm building a website with Django, and among other things I want to display the latest news about a certain topic. I have a back-end Python script that I would like to schedule to retrieve the latest news once every hour, for example. In the meantime I want to display the most recently retrieved news. I'm doing this to avoid running the script every time someone opens my website.
My script is in news.py:
import pprint
import requests
import datetime
import pandas as pd
import dateutil.parser

secret = "********"
url = 'https://newsapi.org/v2/everything?'
quote = 'Amazon'
parameters1 = {
    'q': quote,
    'pageSize': 100,
    'sortBy': 'publishedAt',  # NewsAPI expects camelCase 'sortBy'
    'apiKey': secret,
}
response1 = requests.get(url, params=parameters1)
response_json1 = response1.json()
text_combined1 = []
for i in response_json1['articles']:
    if i['content'] is not None:
        case = {'Source': i['source']['name'], 'Title': i['title'], 'url': i['url'],
                'Published on:': dateutil.parser.parse(i['publishedAt']).strftime('%Y-%m-%d'),
                'Image': i['urlToImage']}
        text_combined1.append(case)
data_amazon = pd.DataFrame.from_dict(text_combined1)
news1 = data_amazon.iloc[0]
news2 = data_amazon.iloc[1]
news3 = data_amazon.iloc[2]
My views.py looks like this:
from django.shortcuts import render
from .news import *

def dashboard(request):
    # news.py defines data_amazon (not data), so reference that name here
    content = {'data': data_amazon, 'news1': news1, 'news2': news2, 'news3': news3}
    return render(request, 'dashboard.html', content)
I'm new to web development, but my understanding as of now is that every time someone sends a request to my webpage, that script is run. That would delay the display of the news and most likely get my access to the News API denied due to too many requests.
Thank you in advance!
A good way to do this is with Celery, which lets you schedule tasks in Django. You can read more about it, and see some other options as well, in "Set up a scheduled job?".
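As a rough illustration, here is a minimal Celery beat sketch (the module path myapp.tasks, the task name fetch_news, and the hourly schedule are assumptions, not from the original post):

# tasks.py -- assumes a Celery app is already configured for the Django project
from celery import shared_task
from celery.schedules import crontab

@shared_task
def fetch_news():
    # Move the request/parsing code from news.py here and store the
    # results (e.g. in the database) instead of module-level variables.
    ...

# settings.py -- run the task once an hour via celery beat
CELERY_BEAT_SCHEDULE = {
    'fetch-news-hourly': {
        'task': 'myapp.tasks.fetch_news',
        'schedule': crontab(minute=0),  # at the top of every hour
    },
}

The dashboard view would then read the stored results instead of importing them from news.py at request time.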

Keep getting Connection Reset Error 10054 when scraping Amazon jobs results

Obviously I'm still new to Python, as you can see from my code, but I'm failing my way through it.
I am scraping Amazon jobs search results but keep getting connection reset error 10054 after about 50 requests to the URL. I added the Crawlera proxy network to avoid getting banned, but it's still not working. I know the URL is long, but it seems to work without having to add too many other separate parts to it. The results page has about 12,000 jobs total with 10 jobs per page, so I don't even know if scraping that much data is the problem to begin with. Amazon shows each page in the URL as 'result_limit=10', so I've been going through each page by 10s instead of 1 page per request. Not sure if that's right. Also, the last page stops at 9,990.
The code works, but I'm not sure how to get past the connection error. As you can see, I've added things like a user agent, but I'm not sure it even does anything. Any help would be appreciated, as I've been stuck on this for countless days and hours. Thanks!
# Imports inferred from the code below (the originals weren't shown in the post):
import csv
import json
from datetime import datetime
from random import randint
from time import sleep, time
from warnings import warn
from requests import get
from fake_useragent import UserAgent
from IPython.display import clear_output

def get_all_jobs(pages):
    requests = 0
    start_time = time()
    total_runtime = datetime.now()
    for page in pages:
        try:
            ua = UserAgent()
            header = {
                'User-Agent': ua.random
            }
            response = get('https://www.amazon.jobs/en/search.json?base_query=&city=&country=USA&county=&'
                           'facets%5B%5D=location&facets%5B%5D=business_category&facets%5B%5D=category&'
                           'facets%5B%5D=schedule_type_id&facets%5B%5D=employee_class&facets%5B%5D=normalized_location'
                           '&facets%5B%5D=job_function_id&job_function_id%5B%5D=job_function_corporate_80rdb4&'
                           'latitude=&loc_group_id=&loc_query=USA&longitude=&'
                           'normalized_location%5B%5D=Seattle%2C+Washington%2C+USA&'
                           'normalized_location%5B%5D=San+Francisco'
                           '%2C+California%2C+USA&normalized_location%5B%5D=Sunnyvale%2C+California%2C+USA&'
                           'normalized_location%5B%5D=Bellevue%2C+Washington%2C+USA&'
                           'normalized_location%5B%5D=East+Palo+Alto%2C+California%2C+USA&'
                           'normalized_location%5B%5D=Santa+Monica%2C+California%2C+USA&offset={}&query_options=&'
                           'radius=24km&region=&result_limit=10&schedule_type_id%5B%5D=Full-Time&'
                           'sort=relevant'.format(page),
                           headers=header,
                           proxies={
                               # Crawlera expects 'apikey:@proxy...'; the '@' was garbled to '#' in the post
                               "http": "http://1ea01axxxxxxxxxxxxxxxxxxx:@proxy.crawlera.com:8010/"
                           })
            # Monitor the frequency of requests
            requests += 1
            # Pause the loop between 8 and 15 seconds
            sleep(randint(8, 15))
            current_time = time()
            elapsed_time = current_time - start_time
            print("Amazon Request:{}; Frequency: {} request/s; Total Run Time: {}".format(
                requests, requests / elapsed_time, datetime.now() - total_runtime))
            clear_output(wait=True)
            # Throw a warning for non-200 status codes
            if response.status_code != 200:
                warn("Request: {}; Status code: {}".format(requests, response.status_code))
            # Break the loop if the number of requests is greater than expected
            if requests > 999:
                warn("Number of requests was greater than expected.")
                break
            yield from get_job_infos(response)
        except AttributeError as e:
            print(e)
            continue

def get_job_infos(response):
    amazon_jobs = json.loads(response.text)
    for website in amazon_jobs['jobs']:
        site = website['company_name']
        title = website['title']
        location = website['normalized_location']
        job_link = 'https://www.amazon.jobs' + website['job_path']
        yield site, title, location, job_link

def main():
    # Page offsets start at 0 and increase by 10 per page.
    pages = [str(i) for i in range(0, 9990, 10)]
    with open('amazon_jobs.csv', "w", newline='', encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["Website", "Title", "Location", "Job URL"])
        writer.writerows(get_all_jobs(pages))

if __name__ == "__main__":
    main()
I'm not an expert on Amazon's anti-bot policies, but if they have flagged you once, your IP could be flagged for a while; they might also limit how many similar requests you can make in a certain time frame.
Google for a patch to urllib so you can see the request headers in real time. Beyond rate limits per IP/domain, Amazon will look at your request headers to determine whether you're human, so compare what you're sending with a regular browser's request headers.
Just standard practice: keep cookies for a normal amount of time, and use proper referers and a popular user agent.
All this can be done with the requests library (pip install requests; see the Session object).
It looks like you're sending a request to an internal Amazon URL without a Referer header, which doesn't happen in a normal browser.
Another example: keeping cookies from one user agent and then switching to another is also not what a browser does.
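A minimal sketch of those suggestions with requests (the header values below are illustrative, not known-good ones):

import requests

session = requests.Session()  # persists cookies across requests
session.headers.update({
    # Pick one realistic user agent and keep it for the whole session.
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
    # A browser hitting this JSON endpoint would arrive from the search page.
    'Referer': 'https://www.amazon.jobs/en/search',
})
response = session.get('https://www.amazon.jobs/en/search.json',
                       params={'offset': 0, 'result_limit': 10})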

Google App Engine (Python) redirect not working

I am trying to execute a redirect, but it does not seem to be happening. The XHR output says the page has finished loading, but my page is not redirected at all. The database has the correct data that I queried for.
def post(self):
    modcode = self.request.get("code")
    email = users.get_current_user().email()
    query = db.GqlQuery("SELECT * from ModuleReviews where code =:1 and email =:2", modcode, email).get()
    if query is not None:
        self.redirect('/errorR')
    else:
        module = ModuleReviews(code=self.request.get("code"), text=self.request.get("review"))
        module.text = self.request.get("review")
        module.code = self.request.get("code")
        module.ratings = self.request.get("ratings")
        module.workload = self.request.get("workload")
        module.diff = self.request.get("diff")
        module.email = users.get_current_user().email()
        module.put()
        self.redirect('/display')
If you're using XHR, you will have to get your JavaScript handler to do the redirect via window.location. However, since you always want a redirect, you should consider whether using Ajax is the right thing at all: just submitting via a normal POST would provide exactly the functionality you want without any JavaScript needed.
