I am trying to make a bar chart with this code:
import plotly.plotly as py
import plotly.graph_objs as go
data = [go.Bar(
    x=['giraffes', 'orangutans', 'monkeys'],
    y=[20, 14, 23]
)]
py.iplot(data, filename='basic-bar')
But I get this error:
PlotlyLocalCredentialsError Traceback (most recent call last)
<ipython-input-42-9eae40f28f37> in <module>()
3 y=[20, 14, 23]
4 )]
----> 5 py.iplot(data, filename='basic-bar')
C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py
in iplot(figure_or_data, **plot_options)
149 if 'auto_open' not in plot_options:
150 plot_options['auto_open'] = False
--> 151 url = plot(figure_or_data, **plot_options)
152
153 if isinstance(figure_or_data, dict):
C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py
in plot(figure_or_data, validate, **plot_options)
239
240 plot_options = _plot_option_logic(plot_options)
--> 241 res = _send_to_plotly(figure, **plot_options)
242 if res['error'] == '':
243 if plot_options['auto_open']:
C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py
in _send_to_plotly(figure, **plot_options)
1401 cls=utils.PlotlyJSONEncoder)
1402 credentials = get_credentials()
-> 1403 validate_credentials(credentials)
1404 username = credentials['username']
1405 api_key = credentials['api_key']
C:\Users\Demonstrator\Anaconda3\lib\site-packages\plotly\plotly\plotly.py
in validate_credentials(credentials)
1350 api_key = credentials.get('api_key')
1351 if not username or not api_key:
-> 1352 raise exceptions.PlotlyLocalCredentialsError()
1353
1354
PlotlyLocalCredentialsError:
Couldn't find a 'username', 'api-key' pair for you on your local machine. To sign in temporarily (until you stop running Python), run:
>>> import plotly.plotly as py
>>> py.sign_in('username', 'api_key')
Even better, save your credentials permanently using the 'tools' module:
>>> import plotly.tools as tls
>>> tls.set_credentials_file(username='username', api_key='api-key')
For more help, see https://plot.ly/python.
Any ideas on how to fix this?
Thank you
You need to pay attention to the traceback in the error. In this case, it's even more helpful than usual. The solution is given to you here:
PlotlyLocalCredentialsError:
Couldn't find a 'username', 'api-key' pair for you on your local machine. To sign in temporarily (until you stop running Python), run:
>>> import plotly.plotly as py
>>> py.sign_in('username', 'api_key')
Even better, save your credentials permanently using the 'tools' module:
>>> import plotly.tools as tls
>>> tls.set_credentials_file(username='username', api_key='api-key')
For more help, see https://plot.ly/python.
So, before you attempt to make a plot, enter the credentials you used when you signed up to the site. You may have to sign in through a web browser and request that an API key be generated; the API key is not the same as your password.
I am very new to coding; I just started this summer. I am trying to scrape reviews from the App Store for 9 live-shopping apps: https://www.apple.com/us/search/live-shopping?src=globalnav
I created an xlsx file with information about the apps and exported it as a CSV for the code, hoping that the App Store scraper will identify the apps through their IDs, but it does not seem to work. Here is the code, originally retrieved from https://python.plainenglish.io/scraping-app-store-reviews-with-python-90e4117ccdfb:
import pandas as pd
# for scraping app info from App Store
from itunes_app_scraper.scraper import AppStoreScraper
# for scraping app reviews from App Store
from app_store_scraper import AppStore
# for pretty printing data structures
from pprint import pprint
# for keeping track of timing
import datetime as dt
from tzlocal import get_localzone
# for building in wait times
import random
import time
## Read in file containing app names and IDs
app_df = pd.read_csv('Data/app_.....ids.csv')
app_df.head()
app_name iOS_app_name iOS_app_id url
4 Flip - Beauty and Shopping flip-beauty-shopping 1470077137 https://apps.apple.com/us/app/flip-beauty-shop...
7 Spin Live spin-live 1519146498 https://apps.apple.com/us/app/spin-live/id1519...
1 Popshop - Live Shopping popshop-live-shopping 1009480270 https://apps.apple.com/us/app/popshop-live-sho...
5 Lalabox - Live Stream Shopping lalabox-live-stream-shopping 1496718575 https://apps.apple.com/us/app/lalabox-live-str...
6 Supergreat Beauty supergreat-beauty 1360338670 https://apps.apple.com/us/app/supergreat-beaut...
8 HERO® hero-live-shopping 1178589357 https://apps.apple.com/us/app/hero-live-shoppi...
2 Whatnot: Buy, Sell, Gov Live whatnot-buy-sell-go-live 1488269261 https://apps.apple.com/us/app/whatnot-buy-sell...
3 NTWRK - Live Video Shopping ntwrk-live-video-shopping 1425910407 https://apps.apple.com/us/app/ntwrk-live-video...
0 LIT Live - Live Shopping lit-live-live-shopping 1507315272 https://apps.apple.com/us/app/lit-live-live-sh...
## Get list of app names and app IDs
app_names = list(app_df['iOS_app_name'])
app_ids = list(app_df['iOS_app_id'])
## Set up App Store Scraper
scraper = AppStoreScraper()
app_store_list = list(scraper.get_multiple_app_details(app_ids))
## Pretty print the data for the first app
pprint(app_store_list[0])
https://itunes.apple.com/lookup?id=1507315272&country=nl&entity=software
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_app_details(self, app_id, country, lang, flatten)
179 result = json.loads(result)
--> 180 except json.JSONDecodeError:
181 raise AppStoreException("Could not parse app store response")
IndexError: list index out of range
During handling of the above exception, another exception occurred:
AppStoreException Traceback (most recent call last)
<ipython-input-73-624146f96e92> in <module>
1 ## Set up App Store Scraper
2 scraper = AppStoreScraper()
----> 3 app_store_list = list(scraper.get_multiple_app_details(app_ids))
4
5 app = result["results"][0]
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_multiple_app_details(self, app_ids, country, lang)
205 :param str lang: Dummy argument for compatibility. Unused.
206
--> 207 :return generator: A list (via a generator) of app details
208 """
209 for app_id in app_ids:
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_app_details(self, app_id, country, lang, flatten)
180 except json.JSONDecodeError:
181 raise AppStoreException("Could not parse app store response")
--> 182
183 try:
184 app = result["results"][0]
AppStoreException: No app found with ID 1507315272
This is where I am stuck. It seems like a simple problem, but my experience is very limited. The URL that the App Store scraper uses is not the same one I used to retrieve the app IDs from. Could this be the problem? Please help me solve it. Thank you in advance!
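One way to test that hypothesis (a sketch using only the standard library; the lookup endpoint and parameters come from the URL printed above, and `country=us` is an assumption, matching the storefront the IDs were collected from):

```python
from urllib.parse import urlencode

# The scraper's printed lookup URL queried the Dutch store (country=nl).
# Build the same lookup for another storefront and fetch it manually to
# see whether the ID resolves there:
params = {'id': 1507315272, 'country': 'us', 'entity': 'software'}
url = 'https://itunes.apple.com/lookup?' + urlencode(params)
print(url)

# Uncomment to actually query the store (requires network access):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
# print(data['resultCount'])  # 0 means no app found in that storefront
```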
Why do I get an AttributeError when I run this code in Jupyter? I am trying to figure out how to use NeuroKit.
I've tried to look through the modules one by one, but I can't seem to find the error.
import neurokit as nk
import pandas as pd
import numpy as np
import sklearn
df = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/bio_100Hz.csv")
# Process the signals
bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"], sampling_rate=1000 )
Output Message:
AttributeError Traceback (most recent call last)
<ipython-input-2-ad0abf8de45e> in <module>
11
12 # Process the signals
---> 13 bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"], sampling_rate=1000 )
14 # Plot the processed dataframe, normalizing all variables for viewing purpose
15 nk.z_score(bio["df"]).plot()
~\Anaconda3\lib\site-packages\neurokit\bio\bio_meta.py in bio_process(ecg, rsp, eda, emg, add, sampling_rate, age, sex, position, ecg_filter_type, ecg_filter_band, ecg_filter_frequency, ecg_segmenter, ecg_quality_model, ecg_hrv_features, eda_alpha, eda_gamma, scr_method, scr_treshold, emg_names, emg_envelope_freqs, emg_envelope_lfreq, emg_activation_treshold, emg_activation_n_above, emg_activation_n_below)
123 # ECG & RSP
124 if ecg is not None:
--> 125 ecg = ecg_process(ecg=ecg, rsp=rsp, sampling_rate=sampling_rate, filter_type=ecg_filter_type, filter_band=ecg_filter_band, filter_frequency=ecg_filter_frequency, segmenter=ecg_segmenter, quality_model=ecg_quality_model, hrv_features=ecg_hrv_features, age=age, sex=sex, position=position)
126 processed_bio["ECG"] = ecg["ECG"]
127 if rsp is not None:
~\Anaconda3\lib\site-packages\neurokit\bio\bio_ecg.py in ecg_process(ecg, rsp, sampling_rate, filter_type, filter_band, filter_frequency, segmenter, quality_model, hrv_features, age, sex, position)
117 # ===============
118 if quality_model is not None:
--> 119 quality = ecg_signal_quality(cardiac_cycles=processed_ecg["ECG"]["Cardiac_Cycles"], sampling_rate=sampling_rate, rpeaks=processed_ecg["ECG"]["R_Peaks"], quality_model=quality_model)
120 processed_ecg["ECG"].update(quality)
121 processed_ecg["df"] = pd.concat([processed_ecg["df"], quality["ECG_Signal_Quality"]], axis=1)
~\Anaconda3\lib\site-packages\neurokit\bio\bio_ecg.py in ecg_signal_quality(cardiac_cycles, sampling_rate, rpeaks, quality_model)
355
356 if quality_model == "default":
--> 357 model = sklearn.externals.joblib.load(Path.materials() + 'heartbeat_classification.model')
358 else:
359 model = sklearn.externals.joblib.load(quality_model)
AttributeError: module 'sklearn' has no attribute 'externals'
You could downgrade your scikit-learn version if you don't need the most recent fixes:
pip install scikit-learn==0.20.1
There is an open issue about fixing this problem in a future version:
https://github.com/neuropsychology/NeuroKit.py/issues/101
I ran the exact same code as you and hit the same problem.
I followed the link posted by Louis MAYAUD, where they suggest just adding
from sklearn.externals import joblib
That solves everything, and you don't need to downgrade your scikit-learn version.
Happy coding! :)
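Why that one-line import is enough: importing a package does not import its submodules, so `sklearn.externals` is not an attribute of `sklearn` until something imports it explicitly (this only helps on scikit-learn versions that still ship `sklearn.externals.joblib`). A small standard-library sketch of the same mechanism, using `xml` purely as an illustration:

```python
import importlib
import xml

# Importing a package does not import its submodules: in a fresh
# interpreter, xml.etree is not yet an attribute of the xml package.
print(hasattr(xml, "etree"))          # usually False before any import

# Importing the submodule binds it as an attribute of the parent package,
# which is what `from sklearn.externals import joblib` does for sklearn:
importlib.import_module("xml.etree")
print(hasattr(xml, "etree"))          # True
```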
I'm trying to port some working VBS code into Python to analyze a collection of Word files. I was hoping that comtypes would allow me to reuse most of my code, but I get an error when a Word instance opens a file:
ValueError: NULL COM pointer access
In [2]: from comtypes.client import CreateObject
In [3]: objWord = CreateObject("Word.Application")
In [4]: objWord.Visible = False
In [5]: objDoc = objWord.Documents.Open('my_file.docx')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-c1e34bdd2b13> in <module>
----> 1 objDoc = objWord.Documents.Open('my_file.docx')
c:\program files\python37\lib\site-packages\comtypes\_meta.py in _wrap_coclass(self)
11 itf = self._com_interfaces_[0]
12 punk = cast(self, POINTER(itf))
---> 13 result = punk.QueryInterface(itf)
14 result.__dict__["__clsid"] = str(self._reg_clsid_)
15 return result
c:\program files\python37\lib\site-packages\comtypes\__init__.py in QueryInterface(self, interface, iid)
1156 if iid is None:
1157 iid = interface._iid_
-> 1158 self.__com_QueryInterface(byref(iid), byref(p))
1159 clsid = self.__dict__.get('__clsid')
1160 if clsid is not None:
ValueError: NULL COM pointer access
I would expect to get a document object that I can then read:
nbpages = objDoc.Range.Information(4)
Seems like I needed to provide the full, absolute path to the file. Maybe the Python working folder isn't passed on to the COM object.
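A minimal sketch of that fix (standard library only; `objWord`/`objDoc` are the COM objects from the question, so the actual `Open` call is left commented):

```python
import os

# Word resolves a relative path against its own working directory, not
# Python's, so hand it an absolute path instead:
path = os.path.abspath('my_file.docx')
print(os.path.isabs(path))  # True
# objDoc = objWord.Documents.Open(path)
```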
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib import style
import pandas as pd
import pandas_datareader.data as web
style.use('ggplot')
start = dt.datetime(2000,1,1)
end = dt.datetime(2016,12,31)
df = web.DataReader('INPX', 'yahoo', start, end)
ImmediateDeprecationError Traceback (most recent call last)
<ipython-input-11-d0b9e16fb581> in <module>()
----> 1 df = web.DataReader('INPX', 'yahoo', start, end)
/anaconda3/lib/python3.6/site-packages/pandas_datareader/data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key)
289 """
290 if data_source == "yahoo":
--> 291 raise ImmediateDeprecationError(DEP_ERROR_MSG.format('Yahoo Daily'))
292 return YahooDailyReader(symbols=name, start=start, end=end,
293 adjust_price=False, chunksize=25,
ImmediateDeprecationError:
Yahoo Daily has been immediately deprecated due to large breaks in the API without the
introduction of a stable replacement. Pull Requests to re-enable these data
connectors are welcome.
See https://github.com/pydata/pandas-datareader/issues
I tried the link, but I couldn't find the reason for the immediate deprecation error. I also tried changing 'yahoo' to 'google', i.e. df = web.DataReader('INPX', 'google', start, end), but there is still an error:
/anaconda3/lib/python3.6/site-packages/pandas_datareader/google/daily.py:40: UnstableAPIWarning:
The Google Finance API has not been stable since late 2017. Requests seem
to fail at random. Failure is especially common when bulk downloading.
warnings.warn(UNSTABLE_WARNING, UnstableAPIWarning)
RemoteDataError Traceback (most recent call last)
<ipython-input-12-5d16a3e9b68a> in <module>()
----> 1 df = web.DataReader('INPX', 'google', start, end)
/anaconda3/lib/python3.6/site-packages/pandas_datareader/data.py in DataReader(name, data_source, start, end, retry_count, pause, session, access_key)
313 chunksize=25,
314 retry_count=retry_count, pause=pause,
--> 315 session=session).read()
316
317 elif data_source == "iex":
/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in read(self)
204 if isinstance(self.symbols, (compat.string_types, int)):
205 df = self._read_one_data(self.url,
--> 206 params=self._get_params(self.symbols))
207 # Or multiple symbols, (e.g., ['GOOG', 'AAPL', 'MSFT'])
208 elif isinstance(self.symbols, DataFrame):
/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _read_one_data(self, url, params)
82 """ read one data from specified URL """
83 if self._format == 'string':
---> 84 out = self._read_url_as_StringIO(url, params=params)
85 elif self._format == 'json':
86 out = self._get_response(url, params=params).json()
/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _read_url_as_StringIO(self, url, params)
93 Open url (and retry)
94 """
---> 95 response = self._get_response(url, params=params)
96 text = self._sanitize_response(response)
97 out = StringIO()
/anaconda3/lib/python3.6/site-packages/pandas_datareader/base.py in _get_response(self, url, params, headers)
153 msg += '\nResponse Text:\n{0}'.format(last_response_text)
154
--> 155 raise RemoteDataError(msg)
156
157 def _get_crumb(self, *args):
RemoteDataError: Unable to read URL: https://finance.google.com/finance/historical?q=INPX&startdate=Jan+01%2C+2000&enddate=Dec+31%2C+2016&output=csv
Response Text:
b'Sorry... body { font-family: verdana, arial, sans-serif; background-color: #fff; color: #000; }GoogleSorry...We\'re sorry...... but your computer or network may be sending automated queries. To protect our users, we can\'t process your request right now.See Google Help for more information.Google Home'.
Thank you so much for helping!
A small change as discussed here worked for me. Just use
import pandas_datareader.data as web
sp500 = web.get_data_yahoo('SPY', start=start, end=end)
The error is self-explanatory; the Yahoo API has changed, so the old Pandas code to read from Yahoo's API no longer works. Have you read this discussion about the API change and its impact on Pandas? Essentially, Pandas can't read the new Yahoo API, and it will take a long time to write new code, so the temporary solution is to raise an ImmediateDeprecationError every time someone tries to use Pandas for the Yahoo API.
It is clear that the get_data_yahoo API is broken.
Here is my solution:
First, install fix_yahoo_finance:
pip install fix_yahoo_finance --upgrade --no-cache-dir
Next, before you use the API, insert this code:
import fix_yahoo_finance as yf
yf.pdr_override()
Best wishes!
I am running the code below in Jupyter:
import plotly.plotly as py
import plotly.graph_objs as go
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N)+5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N)-5
# Create traces
trace0 = go.Scatter(
    x = random_x,
    y = random_y0,
    mode = 'markers',
    name = 'markers'
)
trace1 = go.Scatter(
    x = random_x,
    y = random_y1,
    mode = 'lines+markers',
    name = 'lines+markers'
)
trace2 = go.Scatter(
    x = random_x,
    y = random_y2,
    mode = 'lines',
    name = 'lines'
)
data = [trace0, trace1, trace2]
# Plot and embed in ipython notebook!
py.iplot(data, filename='scatter-mode')
I got error result as:
/Library/Python/2.7/site-packages/requests/packages/urllib3/util/ssl_.py:315:
SNIMissingWarning: An HTTPS request has been made, but the SNI
(Subject Name Indication) extension to TLS is not available on this
platform. This may cause the server to present an incorrect TLS
certificate, which can cause validation failures. For more
information, see
https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.
SNIMissingWarning
/Library/Python/2.7/site-packages/requests/packages/urllib3/util/ssl_.py:120:
InsecurePlatformWarning: A true SSLContext object is not available.
This prevents urllib3 from configuring SSL appropriately and may cause
certain SSL connections to fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/Library/Python/2.7/site-packages/requests/packages/urllib3/util/ssl_.py:120:
InsecurePlatformWarning:
A true SSLContext object is not available. This prevents urllib3 from
configuring SSL appropriately and may cause certain SSL connections to
fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in <module>()
34
35 # Plot and embed in ipython notebook!
---> 36 py.iplot(data, filename='scatter-mode')
/Library/Python/2.7/site-packages/plotly/plotly/plotly.pyc in iplot(figure_or_data, **plot_options)
    173         embed_options['height'] = str(embed_options['height']) + 'px'
    174
--> 175     return tools.embed(url, **embed_options)
    176
    177
/Library/Python/2.7/site-packages/plotly/tools.pyc in embed(file_owner_or_url, file_id, width, height)
    407     else:
    408         url = file_owner_or_url
--> 409     return PlotlyDisplay(url, width, height)
    410 else:
    411     if (get_config_defaults()['plotly_domain']
/Library/Python/2.7/site-packages/plotly/tools.pyc in __init__(self, url, width, height)
   1382     def __init__(self, url, width, height):
   1383         self.resource = url
-> 1384         self.embed_code = get_embed(url, width=width, height=height)
   1385         super(PlotlyDisplay, self).__init__(data=self.embed_code)
   1386
/Library/Python/2.7/site-packages/plotly/tools.pyc in get_embed(file_owner_or_url, file_id, width, height)
    313         "\nRun help on this function for more information."
    314         "".format(url, plotly_rest_url))
--> 315     urlsplit = six.moves.urllib.parse.urlparse(url)
    316     file_owner = urlsplit.path.split('/')[1].split('~')[1]
    317     file_id = urlsplit.path.split('/')[2]
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'urlparse'
I have tried everything suggested in this thread:
Attribute Error trying to run Gmail API quickstart in Python
I did export PYTHONPATH=/Library/Python/2.7/site-packages and made sure I unset it to blank first (yes, that path exists on my Mac).
I updated w3lib (1.13.0) and six (1.10.0).
I am using Jupyter 4.0.6 and Python 2.7.6.
What else could go wrong? Please help.
I realized I had picked the wrong kernel in Jupyter. With the PySpark kernel it gave me this error; with the Python 2 or Python 3 kernel, it works fine.
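A quick way to confirm which interpreter a notebook kernel is actually using (run this in a cell):

```python
import sys

# Each Jupyter kernel runs its own interpreter; these show which Python
# the current kernel points at:
print(sys.executable)
print(sys.version)
```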