I am using the GitHub API to plot the most-starred Python projects. This is the code:
import requests
import pygal
from pygal.style import LightColorizedStyle as LCS, LightenStyle as LS
path = 'https://api.github.com/search/repositories?q=language:python&sort=stars'
r = requests.get(path)
r_py = r.json()
repo_dicts = r_py['items']
names, plot_dicts = [], []
for repo_dict in repo_dicts:
    names.append(repo_dict['name'])
    attr = {
        'stars': repo_dict['stargazers_count'],
        'description': repo_dict['description'] or "",
        'link': repo_dict['html_url'],
    }
    plot_dicts.append(attr)
chart = pygal.Bar(x_label_rotation=45, show_legend=False)
chart.x_labels = names
chart.add('', plot_dicts)
chart.title = 'Most popular Python Projects on GitHub'
chart.render_to_file('PopPy.svg')
When I open the rendered file (PopPy.svg), it shows a chart with no data.
I double-checked the API response to see whether I was using the wrong keys for the dictionaries. Everything seems correct, but it still doesn't work as expected.
Please help me with this, thank you.
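For reference, pygal reads specific keys from each point dict: 'value' for the bar height, plus optional metadata such as 'label' and 'xlink'. A minimal sketch, assuming the custom key names are the problem (keys like 'stars', 'description', and 'link' would be silently ignored, which would leave the chart empty):

attr = {
    'value': repo_dict['stargazers_count'],   # pygal plots this key
    'label': repo_dict['description'] or "",  # shown in the tooltip
    'xlink': repo_dict['html_url'],           # makes the bar a link
}
plot_dicts.append(attr)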
Hello, does anyone have an idea why this piece of Python code won't work for me? I am trying to get a piece of API data into a local Python variable, and it just won't seem to work. I tried different solutions:
from urllib import request
import requests
import datetime
#******************************** VARIABLES *****************************#
SHEET_ENDPOINT = "https://api.sheety.co/c2508c72b1a9443966fca6445ff27747/workoutTracker/workouts"
#******************************** GET SHEETY API *****************************#
inbound = requests.get(SHEET_ENDPOINT)
RESULT = inbound.json()
drum = []
for exercise in RESULT["workouts"]:
    date = exercise["date"]
    time = exercise["time"]
    exercise = exercise["exercise"]
    duration = exercise["duration"]
    calories = exercise["calories"]
    g = f"Date:{date}\nTime:{time}\nExercise:{name}\nDuration:{duration}\nCalories:{calories}"
    drum.append(g)
for item in drum:
    print(item)
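A sketch of a likely fix (a guess from reading the loop, not a confirmed diagnosis): the line exercise = exercise["exercise"] replaces the row dict with a string, so the next lookups like exercise["duration"] fail, and the f-string references a name variable that is never defined. Keeping the row dict intact avoids both problems:

for workout in RESULT["workouts"]:
    date = workout["date"]
    time = workout["time"]
    name = workout["exercise"]   # keep the row dict; store the name separately
    duration = workout["duration"]
    calories = workout["calories"]
    g = f"Date:{date}\nTime:{time}\nExercise:{name}\nDuration:{duration}\nCalories:{calories}"
    drum.append(g)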
I was browsing for the question in the title, and none of the proposed answers worked for me. I saved three Jupyter notebooks with all my notes on a topic, and now they seem to be HTML files. I saved them using Save Notebook As... in Jupyter, so if anyone can tell me whether that was the right way, it would be much appreciated.
The problem is that I now need my notes (both Markdown and code) to check some details, and I can't open the file as a notebook, only as a plain-text HTML file.
Is there any way to get it back to a .ipynb file? None of the options I could find worked for me. I tried running this code, but had no success:
from bs4 import BeautifulSoup
import json
import urllib.request
#url = 'http://nbviewer.jupyter.org/url/jakevdp.github.com/downloads/notebooks/XKCD_plots.ipynb'
#response = urllib.request.urlopen(url)
# for local html file
response = open("C:/Users/Usuario/Desktop/MICHEL/DATA SCIENCE/WEB SCRAPING (UDEMY)/WEB SCRAPING PAGINA ESTATICA/LXML/web_scraping_con_lxml")
text = response.read()
soup = BeautifulSoup(text, 'lxml')
# see some of the html
print(soup.div)
dictionary = {'nbformat': 4, 'nbformat_minor': 1, 'cells': [], 'metadata': {}}
for d in soup.findAll("div"):
    if 'class' in d.attrs.keys():
        for clas in d.attrs["class"]:
            if clas in ["text_cell_render", "input_area"]:
                # code cell
                if clas == "input_area":
                    cell = {}
                    cell['metadata'] = {}
                    cell['outputs'] = []
                    cell['source'] = [d.get_text()]
                    cell['execution_count'] = None
                    cell['cell_type'] = 'code'
                    dictionary['cells'].append(cell)
                else:
                    cell = {}
                    cell['metadata'] = {}
                    cell['source'] = [d.decode_contents()]
                    cell['cell_type'] = 'markdown'
                    dictionary['cells'].append(cell)
open('notebook.ipynb', 'w').write(json.dumps(dictionary))
The file is in the location that you can see in the program. Does anyone know how I can fix this and how I can prevent it from happening again?
Sorry, my bad. I just asked a professor, and he showed me that my computer was saving the code in a format other than .ipynb. I simply switched the file extension back to .ipynb manually, which solved the problem. Now I can open the files with the notebook again and get the regular format.
Thanks!
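As a quick sanity check before renaming (a sketch, not part of the original fix; the filename is hypothetical): if the mis-saved file parses as JSON and carries an nbformat key, it really is notebook content and only the extension is wrong:

import json

try:
    with open("notes.html", encoding="utf-8") as f:  # hypothetical filename
        data = json.load(f)
    print("looks like a notebook:", "nbformat" in data)
except ValueError:
    print("not JSON; this one is a real HTML export")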
My script works in Spyder, but when I try to run it from Task Scheduler (using a .bat file) it won't run. I believe the reason is an issue with the URL request, and that I might need to use a URL parser... I'm not sure how that would work for my code, though. (The API key is not mine but a tester key.)
import urllib.request as request
import json
import pandas as pd
ETFS = ["VTI"]
def Name(ticker):
    url = "https://eodhistoricaldata.com/api/fundamentals/{ticker}.US?api_token=OeAFFmMliFG5orCUuwAKQ8l4WWFQ67YX".format(ticker=ticker)
    with request.urlopen(url) as response:
        source = response.read()
    data = json.loads(source)
    type(data.get('General', {}).get('Name', []))
    len(data.get('General', {}).get('Name', []))
    Name = data.get('General', {}).get('Name', [])
    return Name
resultname = list(map(Name, ETFS))
d = { "Ticker": ETFS, 'Name': resultname}
dfx = pd.DataFrame(data=d)
dfx.to_excel("Equity ETFs.xlsx")
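If the problem really is URL handling (an assumption; scheduled-task failures are just as often about working directories or the Python path in the .bat file), a sketch of building the URL with urllib.parse so the ticker and query string are percent-encoded explicitly. The helper name fundamentals_url is hypothetical:

from urllib import parse

def fundamentals_url(ticker, token):
    # Encode the path segment and the query string separately.
    query = parse.urlencode({"api_token": token})
    return "https://eodhistoricaldata.com/api/fundamentals/{}.US?{}".format(
        parse.quote(ticker), query)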
I am getting an error when trying to extract historic gold prices using get_currency_exchange_daily for gold spot prices.
Here is an example of Python code that raises the error ValueError: Invalid API call. Please retry or visit the documentation (https://www.alphavantage.co/documentation/) for FX_DAILY:
from alpha_vantage.foreignexchange import ForeignExchange
# Your key here
key = 'XXX'
ts = ForeignExchange(key)
price, meta = ts.get_currency_exchange_daily(from_symbol='XAU', to_symbol='USD')
Meanwhile, this Python code executes just fine:
from alpha_vantage.foreignexchange import ForeignExchange
# Your key here
key = 'XXX'
ts = ForeignExchange(key)
price, meta = ts.get_currency_exchange_rate(from_currency='XAU', to_currency='USD')
I'm running it with Python 3.8.2 from an Anaconda environment, although that should not make a difference. I would appreciate any help.
I believe the issue was resolved for me by using intraday data and finding a specific API:
from alpha_vantage.timeseries import TimeSeries
ts = TimeSeries(key='xxx', output_format='pandas')
data, meta_data = ts.get_intraday(symbol='RGLDOU.SWI', interval='1min', outputsize='full')
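If daily gold history stays unavailable, another sketch (building only on the two calls from the question, where the spot-rate call succeeds) is to catch the ValueError and fall back to the spot rate:

from alpha_vantage.foreignexchange import ForeignExchange

fx = ForeignExchange('XXX')  # your key here
try:
    price, meta = fx.get_currency_exchange_daily(from_symbol='XAU', to_symbol='USD')
except ValueError:
    # FX_DAILY rejects XAU here; fall back to the spot-rate endpoint.
    price, meta = fx.get_currency_exchange_rate(from_currency='XAU', to_currency='USD')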
I have been unable to overcome this error while trying to add a video to my playlist using the YouTube GData Python API:
gdata.service.RequestError: {'status': 400, 'body': 'Invalid request URI', 'reason': 'Bad Request'}
This seems to be the same error, but there are no solutions as yet. Any help, guys?
import getpass
import gdata.youtube
import gdata.youtube.service
yt_service = gdata.youtube.service.YouTubeService()
# The YouTube API does not currently support HTTPS/SSL access.
yt_service.ssl = False
yt_service = gdata.youtube.service.YouTubeService()
yt_service.email = #myemail
yt_service.password = getpass.getpass()
yt_service.developer_key = #mykey
yt_service.source = #text
yt_service.client_id = #text
yt_service.ProgrammaticLogin()
feed = yt_service.GetYouTubePlaylistFeed(username='default')
# iterate through the feed as you would with any other
for entry in feed.entry:
    if entry.title.text == "test":
        lst = entry
        print entry.title.text, entry.id.text
custom_video_title = 'my test video on my test playlist'
custom_video_description = 'this is a test video on my test playlist'
video_id = 'Ncakifd_16k'
playlist_uri = lst.id.text
playlist_video_entry = yt_service.AddPlaylistVideoEntryToPlaylist(playlist_uri, video_id, custom_video_title, custom_video_description)
if isinstance(playlist_video_entry, gdata.youtube.YouTubePlaylistVideoEntry):
    print 'Video added'
The confounding thing is that updating the playlist works, but adding a video does not.
playlist_entry_id = lst.id.text.split('/')[-1]
original_playlist_description = lst.description.text
updated_playlist = yt_service.UpdatePlaylist(playlist_entry_id, 'test', original_playlist_description, playlist_private=False)
The video_id is not wrong, because it's the video from the sample code. What am I missing here? Somebody, help!
Thanks.
GData seems to use the v1 API, so the relevant documentation is here: http://code.google.com/apis/youtube/1.0/developers_guide_protocol.html#Retrieving_a_playlist
This means your playlist_uri should not take the value of lst.id.text; it should take the feedLink element's href attribute in order to be used with AddPlaylistVideoEntryToPlaylist.
Even if you happen to use the v2 API, you should take the URI from the content element's src attribute, as explained in the documentation you get by substituting 2.0 into the above URL. (SO doesn't allow me to put two hyperlinks because I don't have enough reputation!)
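A minimal sketch of that fix, assuming the usual gdata-python-client attributes (feed_link on the entry for v1; content.src would be the v2 equivalent), with the other variables taken from the question's code:

# v1: the playlist's feedLink href is the URI that
# AddPlaylistVideoEntryToPlaylist expects, not the entry id.
playlist_uri = lst.feed_link[0].href
# v2 equivalent (an assumption; only if you are on the 2.0 protocol):
# playlist_uri = lst.content.src
playlist_video_entry = yt_service.AddPlaylistVideoEntryToPlaylist(
    playlist_uri, video_id, custom_video_title, custom_video_description)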