RealTime data appending - Pandas - python

I am trying to do something very basic in pandas and failing miserably.
From a high level, I am taking ask_size data from my broker, who passes the value to me on every tick update.
I can print out the last value easily enough.
All I am trying to do is append each new ask_size as a new row at the end of a DataFrame, after the previous ask_size, so I can do some historical analysis.
def getTickSize():
    askSize_list = []  # empty list
    askSize_list.append(float(ask_size))  # getting askSize and putting it in a list
    datagrab = {'ask_size': askSize_list}  # creating the single column and putting askSize in
    df = pd.DataFrame(datagrab)  # using a pd df
    print(df.tail(10))
I am then calling the function in a different part of my script
However, the output only ever shows the last askSize:
askSize
0 30.0
And it never actually appends the real-time data.
Clearly I am doing something wrong, but I am at a loss as to what.
I have also tried using ignore_index=True in a second df, referencing the first, but no joy:
askSize
0 30.0
1 30.0
I have also tried using for loops, but as there doesn't seem to be anything to iterate over (the data is real-time), I came to a dead end.
(Note: I will also eventually add a timestamp to each new ask_size as it is appended to the list, so only 2 columns in the end.)
Any help is much appreciated

It seems you are creating a new dataframe on every call, not appending new data to an existing one.
You could, for example, create a new dataframe with the row(s) in the same format and append it to the existing dataframe.
Let's say you already have df created and you want to add 1 new entry that is passed in as a parameter (if you need more, specify more parameters). Here is a basic example:
   ask_size
0       1.0
1       2.0
def append_row(newdata, dataframe):
    row = {'ask_size': [float(newdata)]}
    temp_df = pd.DataFrame(row)
    # merge original dataframe with temp_df, keeping a continuous index
    merged_df = pd.concat([dataframe, temp_df], ignore_index=True)
    return merged_df
df = append_row("5.1", df) # this will overwrite your original df
   ask_size
0       1.0
1       2.0
2       5.1
You would need to call the function to add a new row (for instance calling it from inside a loop or any other part of the code).
You can also use df.append() (now deprecated in newer pandas in favor of pd.concat()) and other methods; here are some links that could be useful for your use case:
Merge, join, concatenate and compare (Pandas.pydata.org)
Example of using pd.append() (Pandas.pydata.org)
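To make that concrete for the original tick-callback setup, here is a minimal sketch, assuming getTickSize() is called once per tick and ask_size is available in the enclosing scope exactly as in the question (the timestamp column is the one the question plans to add later):

import pandas as pd

askSize_history = []  # defined once at module level, so it persists between ticks

def getTickSize():
    # append the newest tick instead of rebuilding the list from scratch on every call
    askSize_history.append({'timestamp': pd.Timestamp.now(),
                            'ask_size': float(ask_size)})
    df = pd.DataFrame(askSize_history)  # rebuild the frame from the full history
    print(df.tail(10))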

Related

Append std,mean columns to a DataFrame with a for-loop

I want to put the std and mean of a specific column of a dataframe, for different days, in a new dataframe. (The data comes from analyses conducted on big data in multiple Excel files.)
I use a for-loop and append(), but it only keeps the last results, not all of them.
Here is my code:
hh = ['01:00','02:00','03:00','04:00','05:00']
for j in hh:
    month = 1
    hour = j
    data = get_data(month, hour)  ## it works correctly, reads individual Excel spreadsheet
    data = pd.DataFrame(data, columns=['Flowday','Interval','Demand','Losses (MWh)','Total Load (MWh)'])
    s_td = data.iloc[:,4].std()
    meean = data.iloc[:,4].mean()
    final = pd.DataFrame(columns=['Month','Hour','standard deviation','average'])
    final.append({'Month':j, 'Hour':j, 'standard deviation':s_td, 'average':meean}, ignore_index=True)
I am not sure, but I believe you should assign the result of final.append(...) back to a variable:
final = final.append({'Month':j, 'Hour':j, 'standard deviation':s_td, 'average':meean}, ignore_index=True)
Update
If time efficiency is of interest to you, it is better to collect your desired values ({'Month':j, 'Hour':j, 'standard deviation':s_td, 'average':meean}) in a list and build the dataframe from that list once at the end; this is said to have better performance. (Thanks to @stefan_aus_hannover)
This is what I am referring to in the comments on Amirhossein's answer:
hh = ['01:00','02:00','03:00','04:00','05:00']
lister = []
final = pd.DataFrame(columns=['Month','Hour','standard deviation','average'])
for j in hh:
    month = 1
    hour = j
    data = get_data(month, hour)  ## it works correctly
    data = pd.DataFrame(data, columns=['Flowday','Interval','Demand','Losses (MWh)','Total Load (MWh)'])
    s_td = data.iloc[:,4].std()
    meean = data.iloc[:,4].mean()
    lister.append({'Month':j, 'Hour':j, 'standard deviation':s_td, 'average':meean})
final = final.append(pd.DataFrame(lister), ignore_index=True)
Conceptually you're just doing an aggregate by hour, with the two functions std and mean, then appending that to your result dataframe. Something like the following; I'll revise it if you give us reproducible input data. Note that the .agg/.aggregate() function accepts a dict of {'result_col': aggregating_function}, which lets you apply multiple aggregating functions and directly name their result columns, so there is no need to declare temporaries. If you only care about aggregating column 4 ('Total Load (MWh)'), then there is no need to read in columns 0..3.
final = pd.DataFrame(columns=['Month','Hour','standard deviation','average'])
for hour in hh:
    # Read in columns-of-interest from individual Excel sheet for this month and hour...
    data = get_data(1, hour)  # month value as in the question
    data = pd.DataFrame(data, columns=['Flowday','Interval','Demand','Losses (MWh)','Total Load (MWh)'])
    # Compute the corresponding row of the aggregate, naming the result columns directly...
    stats = data['Total Load (MWh)'].agg({'standard deviation': 'std', 'average': 'mean'})
    dat_hh_aggregate = {'Month': 1, 'Hour': hour,
                        'standard deviation': stats['standard deviation'],
                        'average': stats['average']}
    final = final.append(dat_hh_aggregate, ignore_index=True)
Notes:
pd.read_excel with usecols=['Flowday','Interval',...] allows you to avoid reading in columns that you aren't interested in to begin with. You haven't supplied reproducible code for get_data(), but you should parameterize it so you can pass the list of columns-of-interest. You seem to only want to aggregate column 4 ('Total Load (MWh)') anyway; see the sketch after these notes.
There's no need to store separate local variables s_td and meean; just use .aggregate() directly.
There's no need to have both lister and final. Just have one results dataframe, final, and append to it, ignoring the index. (If you get issues with that, post updated code here and make sure it's reproducible.)
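As a concrete illustration of the usecols note, a minimal sketch - assuming get_data() ultimately wraps pd.read_excel, and noting that excel_path below is a hypothetical placeholder for your per-month/hour file:

def get_data(month, hour, columns, excel_path='data.xlsx'):  # excel_path is a placeholder
    # read only the columns you actually need, instead of the whole sheet
    return pd.read_excel(excel_path, usecols=columns)

data = get_data(1, hour, columns=['Total Load (MWh)'])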

Adding values to a dataframe column

I have a csv data file that contains data for recordID, duration, src, dst in each row.
I want to label each row (in a new column) with either a 0 or a 1, depending on the output of my algorithm.
I'm currently doing something like this; however, once I output the DataFrame to a csv file, it deleted all the other, existing columns.
Another issue is that this solution is extraordinarily slow. I thought of building a simple array of the labels first and then adding that entire array as a new column, but I don't know how to do that either.
df2 = pd.read_csv(f_path2, names=["record ID", "duration_", "src_bytes", "dst_bytes", "label"], header=None)
df2 = df2.dropna()
df2.head()
for source, dest, label in X_test_scaled:
    predict = kmeans.predict([[source, dest]])
    df2.at[total, 'label'] = predict  # total as index
How do I do this correctly - actually update my existing file without rewriting the other columns, and faster?
This is a guess since it is not really clear what your data looks like. But it seems that running kmeans.predict for the entire list at once might speed things up. You could then assign the list of predictions to a column in your dataframe:
df2['label'] = kmeans.predict([[source, dest] for source, dest, label in X_test_scaled])
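On the other part of the question (the existing columns disappearing in the output file): since df2 was read with all five columns and only the 'label' column is overwritten, writing the whole frame back out should keep everything. A small sketch, assuming X_test_scaled has one entry per row of df2 and reusing the question's f_path2:

df2['label'] = kmeans.predict([[source, dest] for source, dest, label in X_test_scaled])
df2.to_csv(f_path2, index=False, header=False)  # header=False to match the original header-less file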
Your question isn't precise enough to give an exact solution; here is what I can conclude from the information given:
You can use apply() together with loc.
Inside apply() you have access to every row - it works like an iterator over all rows.
Inside predictorFunction, based on the other columns, you can return whatever you need (in this case, just run your predictor).
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.loc.html
def predictorFunction(currentRow):
    print(currentRow["record ID"])
    # fill in the real column names, then return the prediction:
    # return kmeans.predict([[currentRow["columnNameA"], currentRow["columnNameB"]]])[0]

df2['Predict'] = df2.apply(predictorFunction, axis=1)

Python loop through two dataframes and find similar column

I am currently working on a project where my goal is to get the game scores for each NCAA men's basketball game. In order to do this, I need to use the python package sportsreference. I need to use two dataframes: one called df, which has the game date, and one called box_index (shown below), which has the unique link of each game. I need to get the date column replaced by the unique link of each game. These unique links start with the date (formatted exactly as in the date column of df), which makes it easier to do this with regex or .contains(). I keep getting a KeyError: 0. Can someone help me figure out what is wrong with my logic below?
from sportsreference.ncaab.schedule import Schedule

def get_team_schedule(name):
    combined = Schedule(name).dataframe
    box_index = combined["boxscore_index"]
    box = box_index.to_frame()
    #print(box)

    for i in range(len(df)):
        for j in range(len(box)):
            if box.loc[i, "boxscore_index"].contains(df.loc[i, "date"]):
                df.loc[i, "date"] = box.loc[i, "boxscore_index"]

get_team_schedule("Virginia")
It seems like "box" and "df" are pandas data frame, and since you are iterating through all the rows, it may be more efficient to use iterrows (instead of searching by index with ".loc")
for i, row_df in df.iterrows():
    for j, row_box in box.iterrows():
        if row_df["date"] in row_box["boxscore_index"]:
            df.at[i, 'date'] = row_box["boxscore_index"]
the ".at" function will overwrite the value at a given cell
Just FYI, iterrows is more efficient than .loc; however, itertuples is about 10x faster, and zip about 100x.
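For reference, a rough sketch of the same nested comparison written with itertuples; attribute access works here because 'date' and 'boxscore_index' are valid Python identifiers:

for df_row in df.itertuples():
    for box_row in box.itertuples():
        if df_row.date in box_row.boxscore_index:
            df.at[df_row.Index, 'date'] = box_row.boxscore_index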
The KeyError: 0 is saying that you can't get the row at index 0, because there is no index value of 0 when using box.loc[i, "boxscore_index"] (the index values are the dates, for example '2020-12-22-14-virginia'). You could use .iloc though, like box.iloc[i]["boxscore_index"]. You'd have to convert all the .loc calls to that.
Like the other post said though, I wouldn't go that path. I actually wouldn't even use iterrows here. I would put the box_index into a list, then iterate through that. Then use pandas to filter your df dataframe. I'm sort of making some assumptions about what df looks like, so if this doesn't work, or isn't what you're looking to do, please share some sample rows of df:
from sportsreference.ncaab.schedule import Schedule

def get_team_schedule(name):
    combined = Schedule(name).dataframe
    box_index_list = list(combined["boxscore_index"])
    for box_index in box_index_list:
        temp_game_data = df[df["date"] == box_index]
        print(box_index)
        print(temp_game_data, '\n')

get_team_schedule("Virginia")

How to maintain lexsort status when adding to a multi-indexed DataFrame?

Say I construct a dataframe with pandas, having multi-indexed columns:
mi = pd.MultiIndex.from_product([['trial_1', 'trial_2', 'trial_3'], ['motor_neuron','afferent_neuron','interneuron'], ['time','voltage','calcium']])
ind = np.arange(1,11)
df = pd.DataFrame(np.random.randn(10,27),index=ind, columns=mi)
Link to image of output dataframe
Say I want only the voltage data from trial 1. I know that the following code fails, because the indices are not sorted lexically:
idx = pd.IndexSlice
df.loc[:,idx['trial_1',:,'voltage']]
As explained in another post, the solution is to sort the dataframe's indices, which works as expected:
dfSorted = df.sortlevel(axis=1)
dfSorted.loc[:,idx['trial_1',:,'voltage']]
I understand why this is necessary. However, say I want to add a new column:
dfSorted.loc[:,('trial_1','interneuron','scaledTime')] = 100 * dfSorted.loc[:,('trial_1','interneuron','time')]
Now dfSorted is not sorted anymore, since the new column was tacked onto the end, rather than snuggled into order. Again, I have to call sortlevel before selecting multiple columns.
I feel this makes for repetitive, bug-prone code, especially when adding lots of columns to the much bigger dataframe in my own project. Is there a (preferably clean-looking) way of inserting new columns in lexical order without having to call sortlevel over and over again?
One approach would be to use filter which does a text filter on the column names:
In [117]: df['trial_1'].filter(like='voltage')
Out[117]:
motor_neuron afferent_neuron interneuron
voltage voltage voltage
1 -0.548699 0.986121 -1.339783
2 -1.320589 -0.509410 -0.529686
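If you do end up re-sorting after every insertion, a small helper keeps that repetition in one place. A sketch, using sort_index(axis=1), which is the replacement for sortlevel() in newer pandas:

def add_sorted_column(df, key, values):
    # insert the new column under its MultiIndex key, then restore lexsort order once
    df[key] = values
    return df.sort_index(axis=1)

dfSorted = add_sorted_column(dfSorted,
                             ('trial_1', 'interneuron', 'scaledTime'),
                             100 * dfSorted[('trial_1', 'interneuron', 'time')])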

iteratively read (tsv) file for Pandas DataFrame

I have some experimental data which looks like this - http://paste2.org/YzJL4e1b (too long to post here). The blocks which are separated by field name lines are different trials of the same experiment - I would like to read everything in a pandas dataframe but have it bin together certain trials (for instance 0,1,6,7 taken together - and 2,3,4,5 taken together in another group). This is because different trials have slightly different conditions and I would like to analyze the results difference between these conditions. I have a list of numbers for different conditions from another file.
Currently I am doing this:
tracker_data = pd.DataFrame
tracker_data = tracker_data.from_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4)
tracker_data['GazePointXLeft'] = tracker_data['GazePointXLeft'].astype(np.float64)
but this of course just reads everything in one go (including the field name lines) - it would be great if I could nest the blocks somehow which allows me to easily access them via numeric indices...
Do you have any ideas how I could best do this?
You should use read_csv rather than from_csv*:
tracker_data = pd.read_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4)
If you want to join a list of DataFrames like this you could use concat:
trackers = (pd.read_csv(bhpath+i+'_wmet.tsv', sep='\t', header=4) for i in range(?))
df = pd.concat(trackers)
* which I think is deprecated.
I haven't quite got it working, but I think that's because of how I copy/pasted the data. Try this, let me know if it doesn't work.
Using some inspiration from this question
pat = "TimeStamp\tGazePointXLeft\tGazePointYLeft\tValidityLeft\tGazePointXRight\tGazePointYRight\tValidityRight\tGazePointX\tGazePointY\tEvent\n"
with open('rec.txt') as infile:
header, names, tail = infile.read().partition(pat)
names = names.split() # get rid of the tabs here
all_data = tail.split(pat)
res = [pd.read_csv(StringIO(x), sep='\t', names=names) for x in all_data]
We read in the whole file so this won't work for huge files, and then partition it based on the known line giving the column names. tail is just a string with the rest of the data so we can split that, again based on the names. There may be a better way than using StringIO, but this should work.
I'm not sure how you want to join the separate blocks together, but this leaves them as a list. You can concat from there however you desire.
For larger files you might want to write a generator to read until you hit the column names and write a new file until you hit them again. Then read those in separately using something like Andy's answer.
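A rough sketch of that generator idea, assuming the header line pat from the snippet above marks the start of every block:

from io import StringIO
import pandas as pd

def iter_blocks(path, header_line):
    # yield one DataFrame per block, reading the file line by line
    names = header_line.split()
    buf, seen_header = [], False
    with open(path) as infile:
        for line in infile:
            if line == header_line:
                if seen_header and buf:
                    yield pd.read_csv(StringIO(''.join(buf)), sep='\t', names=names)
                buf, seen_header = [], True
            elif seen_header:
                buf.append(line)
    if buf:  # last block
        yield pd.read_csv(StringIO(''.join(buf)), sep='\t', names=names)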
A separate question from how to work with the multiple blocks. Assuming you've got the list of Dataframes, which I've called res, you can use pandas' concat to join them together into a single DataFrame with a MultiIndex (also see the link Andy posted).
In [122]: df = pd.concat(res, axis=1, keys=['a', 'b', 'c']) # Use whatever makes sense for the keys
In [123]: df.xs('TimeStamp', level=1, axis=1)
Out[123]:
     a    b    c
0  NaN  NaN  NaN
1  0.0  0.0  0.0
2  3.3  3.3  3.3
3  6.6  6.6  6.6
I ended up doing it iteratively - very, very iteratively. Nothing else seemed to work.
pat = 'TimeStamp GazePointXLeft GazePointYLeft ValidityLeft GazePointXRight GazePointYRight ValidityRight GazePointX GazePointY Event'

with open(bhpath + fileid + '_wmet.tsv') as infile:
    eye_data = infile.read().split(pat)

eye_data = [trial.split('\r\n') for trial in eye_data]  # split each trial into rows at '\r\n'
for idx, trial in enumerate(eye_data):
    trial = [row.split('\t') for row in trial]
    eye_data[idx] = trial
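If it helps, the nested lists can still be turned into one DataFrame per trial afterwards - a sketch, assuming pat above holds the column names and skipping any rows that don't have the full set of fields:

column_names = pat.split()
trial_dfs = [pd.DataFrame([row for row in trial if len(row) == len(column_names)],
                          columns=column_names)
             for trial in eye_data if trial]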
