AttributeError: 'str' object has no attribute '_historical_klines' - python

I am brand new to coding bots and to coding in general. I copied a simple bot tutorial for beginners. The following part is for getting historical data for cryptocurrencies:
def gethourlydata(symbol):
    frame = pd.DataFrame(Client.get_historical_klines(symbol,
                                                      '1hr',
                                                      'now UTC',
                                                      '25 hours ago UTC'))
    frame = frame.iloc[:, :5]
    frame.columns = ['Time', 'Open', 'High', 'Low', 'Close']
    frame[['Open', 'High', 'Low', 'Close']] = frame[['Open', 'High', 'Low', 'Close']].astype(float)
    frame.Time = pd.to_datetime(frame.Time, unit='ms')
    return frame
First I had to put in a start_str because it was supposedly missing. I did so, executed the function for 'BTCUSDT', and got this:
AttributeError Traceback (most recent call last)
/tmp/ipykernel_1473/2916929938.py in <module>
----> 1 df = gethourlydata('BTCUSDT')
/tmp/ipykernel_1473/2893431243.py in gethourlydata(symbol)
3 '1hr',
4 'now UTC',
----> 5 '25 hours ago UTC'))
6 frame = frame.iloc[:,:5]
7 frame.columns = ['Time','Open','High','Low','Close']
~/.local/lib/python3.7/site-packages/binance/client.py in get_historical_klines(self, symbol, interval, start_str, end_str, limit, klines_type)
930
931 """
--> 932 return self._historical_klines(symbol, interval, start_str, end_str=end_str, limit=limit, klines_type=klines_type)
933
934 def _historical_klines(self, symbol, interval, start_str, end_str=None, limit=500,
AttributeError: 'str' object has no attribute '_historical_klines'
I have tried many different methods, e.g. defining 'self', 'klines_type', etc. in detail, and still some error appears. All I'm trying to do is prove to myself that I can at least run a bot in my Jupyter notebook.
Could someone please help, or at least give some tips?
Thank you!

You first have to initialize the client.
Try this:
from binance.client import Client

my_client = Client("", "")  # for this operation you don't need API keys
my_client.get_historical_klines(symbol, '1h', '25 hours ago UTC', 'now UTC')
Your code called get_historical_klines on the Client class itself, so the string 'BTCUSDT' was bound to self, and Python then looked for _historical_klines on that string, hence the AttributeError. Also note that the interval string for one hour is '1h', not '1hr', and that start_str comes before end_str.
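For completeness, a minimal sketch of the fixed function with the fetching and the frame-shaping separated (klines_to_frame is a hypothetical helper introduced here so the shaping can be checked without network access; the '1h' interval, start-before-end argument order, and instantiated client follow the python-binance API):

```python
import pandas as pd

def klines_to_frame(raw_klines):
    """Shape raw kline rows (as returned by get_historical_klines) into a DataFrame."""
    frame = pd.DataFrame(raw_klines).iloc[:, :5]
    frame.columns = ['Time', 'Open', 'High', 'Low', 'Close']
    frame[['Open', 'High', 'Low', 'Close']] = frame[['Open', 'High', 'Low', 'Close']].astype(float)
    frame['Time'] = pd.to_datetime(frame['Time'], unit='ms')
    return frame

def gethourlydata(client, symbol):
    # interval is '1h' (not '1hr'); start_str comes before end_str
    return klines_to_frame(client.get_historical_klines(
        symbol, '1h', '25 hours ago UTC', 'now UTC'))
```

In the notebook you would create the client once with `Client("", "")` and pass it in, e.g. `gethourlydata(my_client, 'BTCUSDT')`.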


App-Store-Review-Scraping: AppStoreScraper does not recognise app-id

I am very new to coding; I just started this summer. I am trying to scrape App Store reviews for 9 live-shopping apps: https://www.apple.com/us/search/live-shopping?src=globalnav
I created an xlsx file with information about the apps and exported it as CSV for the code, hoping that appstorescraper would identify the apps through their IDs, but it does not seem to work. Here is the code, originally retrieved from https://python.plainenglish.io/scraping-app-store-reviews-with-python-90e4117ccdfb:
import pandas as pd
# for scraping app info from App Store
from itunes_app_scraper.scraper import AppStoreScraper
# for scraping app reviews from App Store
from app_store_scraper import AppStore
# for pretty printing data structures
from pprint import pprint
# for keeping track of timing
import datetime as dt
from tzlocal import get_localzone
# for building in wait times
import random
import time
## Read in file containing app names and IDs
app_df = pd.read_csv('Data/app_.....ids.csv')
app_df.head()
app_name iOS_app_name iOS_app_id url
4 Flip - Beauty and Shopping flip-beauty-shopping 1470077137 https://apps.apple.com/us/app/flip-beauty-shop...
7 Spin Live spin-live 1519146498 https://apps.apple.com/us/app/spin-live/id1519...
1 Popshop - Live Shopping popshop-live-shopping 1009480270 https://apps.apple.com/us/app/popshop-live-sho...
5 Lalabox - Live Stream Shopping lalabox-live-stream-shopping 1496718575 https://apps.apple.com/us/app/lalabox-live-str...
6 Supergreat Beauty supergreat-beauty 1360338670 https://apps.apple.com/us/app/supergreat-beaut...
8 HERO® hero-live-shopping 1178589357 https://apps.apple.com/us/app/hero-live-shoppi...
2 Whatnot: Buy, Sell, Gov Live whatnot-buy-sell-go-live 1488269261 https://apps.apple.com/us/app/whatnot-buy-sell...
3 NTWRK - Live Video Shopping ntwrk-live-video-shopping 1425910407 https://apps.apple.com/us/app/ntwrk-live-video...
0 LIT Live - Live Shopping lit-live-live-shopping 1507315272 https://apps.apple.com/us/app/lit-live-live-sh...
## Get list of app names and app IDs
app_names = list(app_df['iOS_app_name'])
app_ids = list(app_df['iOS_app_id'])
## Set up App Store Scraper
scraper = AppStoreScraper()
app_store_list = list(scraper.get_multiple_app_details(app_ids))
## Pretty print the data for the first app
pprint(app_store_list[0])
https://itunes.apple.com/lookup?id=1507315272&country=nl&entity=software
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_app_details(self, app_id, country, lang, flatten)
179 result = json.loads(result)
--> 180 except json.JSONDecodeError:
181 raise AppStoreException("Could not parse app store response")
IndexError: list index out of range
During handling of the above exception, another exception occurred:
AppStoreException Traceback (most recent call last)
<ipython-input-73-624146f96e92> in <module>
1 ## Set up App Store Scraper
2 scraper = AppStoreScraper()
----> 3 app_store_list = list(scraper.get_multiple_app_details(app_ids))
4
5 app = result["results"][0]
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_multiple_app_details(self, app_ids, country, lang)
205 :param str lang: Dummy argument for compatibility. Unused.
206
--> 207 :return generator: A list (via a generator) of app details
208 """
209 for app_id in app_ids:
/opt/anaconda3/lib/python3.8/site-packages/itunes_app_scraper/scraper.py in get_app_details(self, app_id, country, lang, flatten)
180 except json.JSONDecodeError:
181 raise AppStoreException("Could not parse app store response")
--> 182
183 try:
184 app = result["results"][0]
AppStoreException: No app found with ID 1507315272
This is where I am stuck. It seems like a simple problem, but my experience is very limited. The URL the App Store scraper uses is not the same one I used to retrieve the app IDs. Could this be the problem? Please help me solve it. Thank you in advance!
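One detail the output hints at: the printed lookup request went to country=nl, while these apps were found through the US storefront. A minimal sketch of the URL the scraper queries (lookup_url is a hypothetical helper written here only to illustrate the point):

```python
# Hypothetical illustration of the iTunes lookup endpoint the scraper hits.
# An app listed only in the US storefront returns zero results for
# country=nl, which surfaces as "No app found with ID ...".
def lookup_url(app_id, country):
    return (f"https://itunes.apple.com/lookup"
            f"?id={app_id}&country={country}&entity=software")

print(lookup_url(1507315272, "nl"))  # the failing request from the traceback
print(lookup_url(1507315272, "us"))  # same app, US storefront
```

If that is the cause, passing the storefront explicitly may be enough; per the signature shown in the traceback, get_multiple_app_details accepts a country argument, so something like `scraper.get_multiple_app_details(app_ids, country="us")` is worth trying.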

missing variable to complete cell execution

I hope someone can help me; I've been stuck on this error for a while. I have two .py files that I import in a Jupyter notebook. When I run the last cell (see code below) I get a traceback error I can't fix.
I think there is an error in my ch_data_prep.py file related to the variable df_ch not being correctly passed between files. Is this possible? Any suggestion on how to solve this problem?
Thanks!
ch_data_prep.py
def seg_data(self):
    seg_startdate = input('Enter start date (yyyy-mm-dd): ')
    seg_finishdate = input('Enter end date (yyyy-mm-dd): ')
    df_ch_seg = df_ch[(df_ch['event_datetime'] > seg_startdate)
                      & (df_ch['event_datetime'] < seg_finishdate)]
    return df_ch_seg

df_ch_seg = seg_data(df_ch)
ch_db.py
def get_data():
    # Some omitted code here to connect to database and get data...
    df_ch = pd.DataFrame(result)
    return df_ch

df_ch = get_data()
Jupyter Notebook data_analysis.ipynb
In[1]: import ch_db
df_ch = ch_db.get_data()
In[2]: import ch_data_prep
When I run cell 2, I get this error
--------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-fbe2d4fceba6> in <module>
----> 1 import ch_data_prep
~/clickstream/ch_data_prep.py in <module>
34 return df_ch_seg
35
---> 36 df_ch_seg = seg_data()
TypeError: seg_data() missing 1 required positional argument: 'df_ch'
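A minimal sketch of one way around this, assuming the goal is simply to filter df_ch by a date range: give seg_data an explicit df_ch parameter instead of relying on a name defined in another module, and drop the module-level call (that call is what runs during `import ch_data_prep` and triggers the TypeError):

```python
import pandas as pd

def seg_data(df_ch, seg_startdate, seg_finishdate):
    """Return the rows of df_ch whose event_datetime lies between the two dates."""
    return df_ch[(df_ch['event_datetime'] > seg_startdate)
                 & (df_ch['event_datetime'] < seg_finishdate)]
```

In the notebook you would then call it yourself, e.g. `df_ch_seg = ch_data_prep.seg_data(df_ch, '2021-01-01', '2021-12-31')`, rather than having the import execute it.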

How to solve AttributeError: 'Top2Vec' object has no attribute 'topic_sizes'?

I am trying to work with the top2vec model. When I run the lines of code below, I encounter
AttributeError: 'Top2Vec' object has no attribute 'topic_sizes'
documents, document_scores, document_nums = top2vec.search_documents_by_topic(topic_num=344, num_docs=2)
result_df = Articles_df.loc[document_nums]
result_df["document_scores"] = document_scores
for index, row in result_df.iterrows():
    print(f"Document: {index}, Score: {row.document_scores}")
    print(f"Date: {row.Date}")
    print(f"Headline: {row.Headline}")
    print("-----------")
    print(row.Artciles)
    print("-----------")
    print()
These lines reference topic_sizes nowhere. For a full picture, I am also providing the whole error message.
AttributeError Traceback (most recent call last)
----> 1 documents, document_scores, document_nums = top2vec.search_documents_by_topic(topic_num=344, num_docs=2)
2
3 result_df = Articles_df.loc[document_nums]
4 result_df["document_scores"] = document_scores
5
~/PycharmProjects/News/venv/lib/python3.7/site-packages/top2vec/Top2Vec.py in search_documents_by_topic(self, topic_num, num_docs, return_documents, reduced)
983
984 self._validate_topic_num(topic_num, reduced)
--> 985 self._validate_topic_search(topic_num, num_docs, reduced)
986
987 topic_document_indexes = np.where(self.doc_top == topic_num)[0]
~/PycharmProjects/News/venv/lib/python3.7/site-packages/top2vec/Top2Vec.py in _validate_topic_search(self, topic_num, num_docs, reduced)
489 f" only has {self.topic_sizes_reduced[topic_num]} documents.")
490 else:
--> 491 if num_docs > self.topic_sizes[topic_num]:
492 raise ValueError(f"Invalid number of documents: original topic {topic_num}"
493 f" only has {self.topic_sizes[topic_num]} documents.")
AttributeError: 'Top2Vec' object has no attribute 'topic_sizes'
I am trying to use a pre-trained top2vec model to analyze my own dataset.
I would appreciate any solutions or suggestions.
I tried your code on my dataset and it is working, but I have 2 topics (0 and 1), and 796 of the documents are in topic 1. I did it like this (otherwise I get an error about the number of topics and documents):
documents, document_scores, document_nums = modelx.search_documents_by_topic(topic_num=1, num_docs=796)
The other rows are just like your code.
output:
Document: 1468, Score: 0.3702481687068939
topic id : 2
Topic true name: mideast
Legality of the Jewish Purchase (was Israeli Expansion-lust) Right now, I'm just going to address this point
Document: 1635, Score: 0.3487136960029602
topic id : 0
Topic true name: x
Pulldown menu periodically hangs application on OpenWindows 3.0 : : Has anyone found a fix for the following problem?: : Client Software: SunOs 4

Why do i get an Attribute Error when using Neurokit?

Why do I get an attribute error when I run this code in Jupyter? I am trying to figure out how to use NeuroKit.
I've tried to look through the modules one by one, but I can't seem to find the error.
import neurokit as nk
import pandas as pd
import numpy as np
import sklearn
df = pd.read_csv("https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/bio_100Hz.csv")
# Process the signals
bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"], sampling_rate=1000 )
Output Message:
AttributeError Traceback (most recent call last)
<ipython-input-2-ad0abf8de45e> in <module>
11
12 # Process the signals
---> 13 bio = nk.bio_process(ecg=df["ECG"], rsp=df["RSP"], eda=df["EDA"], add=df["Photosensor"], sampling_rate=1000 )
14 # Plot the processed dataframe, normalizing all variables for viewing purpose
15 nk.z_score(bio["df"]).plot()
~\Anaconda3\lib\site-packages\neurokit\bio\bio_meta.py in bio_process(ecg, rsp, eda, emg, add, sampling_rate, age, sex, position, ecg_filter_type, ecg_filter_band, ecg_filter_frequency, ecg_segmenter, ecg_quality_model, ecg_hrv_features, eda_alpha, eda_gamma, scr_method, scr_treshold, emg_names, emg_envelope_freqs, emg_envelope_lfreq, emg_activation_treshold, emg_activation_n_above, emg_activation_n_below)
123 # ECG & RSP
124 if ecg is not None:
--> 125 ecg = ecg_process(ecg=ecg, rsp=rsp, sampling_rate=sampling_rate, filter_type=ecg_filter_type, filter_band=ecg_filter_band, filter_frequency=ecg_filter_frequency, segmenter=ecg_segmenter, quality_model=ecg_quality_model, hrv_features=ecg_hrv_features, age=age, sex=sex, position=position)
126 processed_bio["ECG"] = ecg["ECG"]
127 if rsp is not None:
~\Anaconda3\lib\site-packages\neurokit\bio\bio_ecg.py in ecg_process(ecg, rsp, sampling_rate, filter_type, filter_band, filter_frequency, segmenter, quality_model, hrv_features, age, sex, position)
117 # ===============
118 if quality_model is not None:
--> 119 quality = ecg_signal_quality(cardiac_cycles=processed_ecg["ECG"]["Cardiac_Cycles"], sampling_rate=sampling_rate, rpeaks=processed_ecg["ECG"]["R_Peaks"], quality_model=quality_model)
120 processed_ecg["ECG"].update(quality)
121 processed_ecg["df"] = pd.concat([processed_ecg["df"], quality["ECG_Signal_Quality"]], axis=1)
~\Anaconda3\lib\site-packages\neurokit\bio\bio_ecg.py in ecg_signal_quality(cardiac_cycles, sampling_rate, rpeaks, quality_model)
355
356 if quality_model == "default":
--> 357 model = sklearn.externals.joblib.load(Path.materials() + 'heartbeat_classification.model')
358 else:
359 model = sklearn.externals.joblib.load(quality_model)
AttributeError: module 'sklearn' has no attribute 'externals'
You could downgrade your scikit-learn version if you don't need the most recent fixes:
pip install scikit-learn==0.20.1
There is an open issue about fixing this in a future version:
https://github.com/neuropsychology/NeuroKit.py/issues/101
I was executing the exact same code as you and ran into the same problem.
I followed the link indicated by Louis MAYAUD, and there they suggest just adding
from sklearn.externals import joblib
That solves everything, and you don't need to downgrade your scikit-learn version.
Happy coding! :)
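Note that on newer scikit-learn releases `sklearn.externals.joblib` has been removed entirely, so that import itself fails. A hedged workaround sketch, assuming the standalone `joblib` package is installed: register it under the old module path before NeuroKit is imported, so NeuroKit's old-style lookup still resolves.

```python
import sys

import joblib
import sklearn.externals

# Alias the standalone joblib under the path NeuroKit still uses, so that
# `sklearn.externals.joblib.load(...)` inside NeuroKit resolves to the
# standalone package without downgrading scikit-learn.
sklearn.externals.joblib = joblib
sys.modules["sklearn.externals.joblib"] = joblib
```

This is a monkeypatch, so it must run before `import neurokit`; editing the installed NeuroKit source to `import joblib` directly is the cleaner long-term fix.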

Pulling stock option data with python pandas - broke overnight

Last night I typed up the following
from pandas.io.data import Options
import csv

symList = []
optData = {}

with open('C:/optionstrade/symbols.txt') as symfile:
    symreader = csv.reader(symfile, delimiter=',')
    for row in symreader:
        symList = row

for symbol in symList:
    temp = Options(symbol, 'yahoo')
    try:
        optData[symbol] = temp.get_all_data()
    except:
        pass
It worked all right. I only got data for 200-something of the 400-something symbols I have in the file, but it pulled the options data for those just fine.
This morning I ran the code again (the markets have been open for nearly an hour) and got nothing:
In [6]: len(optData)
Out[6]: 0
So I run a bit of a test:
test = Options('AIG', 'yahoo')
spam = test.get_all_data()

import pickle
with open('C:/optionstrade/test.txt', 'w') as testfile:
    pickle.dump(test, testfile)
I get this error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-902aa7c31f7e> in <module>()
1 test = Options('AIG','yahoo')
----> 2 spam = test.get_all_data()
C:\Anaconda\lib\site-packages\pandas\io\data.pyc in get_all_data(self, call, put)
1109
1110 for month in months:
-> 1111 m2 = month.month
1112 y2 = month.year
1113
AttributeError: 'str' object has no attribute 'month'
And this is the content of the pickled file:
ccopy_reg
_reconstructor
p0
(cpandas.io.data
Options
p1
c__builtin__
object
p2
Ntp3
Rp4
(dp5
S'symbol'
p6
S'AIG'
p7
sb.
Nothing has changed overnight on my end... last thing I did was save and shut down. First thing I did after waking up was run it again.
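Whatever changed on Yahoo's side, the bare `except: pass` in the loop silently discards every failure, which is why only 200-something of the 400-something symbols appeared before and why today's run yields an empty dict with no hint at all. A minimal sketch of the same pattern that records failures instead (`fetch_all` and `fetch` are hypothetical stand-ins for the `Options(symbol, 'yahoo').get_all_data()` call, written generically so the pattern is testable offline):

```python
def fetch_all(symbols, fetch):
    """Call fetch(symbol) for each symbol, keeping both results and errors."""
    opt_data, failures = {}, {}
    for symbol in symbols:
        try:
            opt_data[symbol] = fetch(symbol)
        except Exception as exc:  # record the error instead of discarding it
            failures[symbol] = repr(exc)
    return opt_data, failures
```

With the errors kept, a morning like this one would immediately show the shared AttributeError for every symbol rather than an empty result.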