Here is what I have (a time series) in a pandas DataFrame:
[screenshot of the dataframe]
(The dates were converted from timestamps.)
My goal is to plot not only the observations but the whole range of dates: I need to see a horizontal line or a gap when there are no new observations.
Dealing with data that is not observed at equidistant points in time is a typical challenge with real-world time series. Given your problem, this code should work.
from matplotlib import pyplot as plt
import pandas as pd

# sample frame
df = pd.DataFrame({'time': ['2022,7,3,0,1,21', '2022,7,3,0,2,47', '2022,7,3,0,2,47',
                            '2022,7,3,0,5,5', '2022,7,3,0,5,5'],
                   'balance': [12.6, 12.54, 12.494426, 12.482481, 12.449206]})
df['time'] = pd.to_datetime(df['time'], format='%Y,%m,%d,%H,%M,%S')

# aggregate time duplicates by mean
df = df.groupby('time').mean()
df.reset_index(inplace=True)

# pick an equidistant time grid (one row per second) from first to last observation
df_new = pd.DataFrame({'time': pd.date_range(start=df.loc[0, 'time'], end=df.loc[2, 'time'], freq='S')})
df = pd.merge(left=df_new, right=df, on='time', how='left')

# fill NaN by padding (forward fill)
df['balance'].fillna(method='pad', inplace=True)
df.set_index('time', inplace=True)

# plot
_ = df.plot(title='Time Series of Balance')
There are several caveats to this solution.
First, your data has a high temporal resolution (seconds), but there are hours-long gaps between observations. You can either coarsen the timestamps by rounding (e.g. to minutes or hours) or keep the second-by-second resolution and accept that most of your balance values will be filled-in values rather than true observations.
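As a minimal sketch of the coarsening option (applied to the frame above, before building the equidistant grid), the timestamps could be rounded down to the minute and the duplicates aggregated again:
df['time'] = df['time'].dt.floor('min')          # coarsen timestamps to minute resolution
df = df.groupby('time', as_index=False).mean()   # re-aggregate duplicates by mean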
Second, you have different balance values for the same timestamp, which indicates faulty entries or a misspecified timestamp. I unified those entries by grouping by timestamp and averaging the balance over the non-unique timestamps.
Third, filled-in gaps and true observations have the same visual representation in the plot (blue dots in the graph). Commenting out the fillna() line would show only the true observations and leave everything in between blank.
Finally, the missing values are merely filled in via padding (forward fill). Look up the other values of the method argument in the documentation in case you want to interpolate linearly, etc.
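For example, if a linear interpolation between the true observations is preferred over padding, one could (as a sketch) replace the fillna() line with:
df['balance'] = df['balance'].interpolate(method='linear')   # linear interpolation instead of forward fill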
Summary
The problems described above are typical for event-driven time series data. Since you deal with a (financial) balance that constitutes a state which is only changed by events (orders), I believe the assumptions made above are reasonable and can easily be adjusted for your or many other use cases.
This helped:
data = data.set_index('time').resample('1M').mean()
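If flat lines over the months without observations are still wanted (rather than NaN gaps), one variation is to forward-fill after the resample; this is just a sketch of that option:
data = data.set_index('time').resample('1M').mean().ffill()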
Related
I have got the below plot of temperature in a time series, with dates aggregated hourly.
What I am trying to do is to interpolate the missing values between 2019 and 2020 using pandas pd.interpolate, and generate results hourly (the same frequency as the rest of the data in weather_data). My data is called weather_data, the index column is called date_time (dtype is float64), and the temperature column also has dtype float64. Here is what I have tried:
test = weather_datetime_index.temperature.interpolate("cubicspline")
test.plot()
This gave the same plot. I also tried (based on this post):
interpolated_temp = weather_datetime_index["temperature"].astype(float).interpolate(method="time")
This still gave the same plot.
I also tried (as per this post):
test = weather_datetime_index.temperature.interpolate("spline",limit_direction="forward", order=1)
test.plot()
but it still gave me the same plot.
How can I interpolate this data using pd.interpolate?
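One thing worth checking (not from the original thread, just a common cause): interpolate() only fills rows that already exist as NaN, and method="time" needs a real DatetimeIndex rather than a float64 one. As a sketch, assuming the index has been converted to a DatetimeIndex, resampling to hourly first makes the missing hours appear as NaN rows before interpolating:
hourly = weather_datetime_index.resample("H").mean()              # insert missing hours as NaN rows
hourly["temperature"] = hourly["temperature"].interpolate(method="time")
hourly["temperature"].plot()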
I am trying to plot random rows in a dataset, where the data consists of data collated across different dates. I have plotted it in such a way that the x-axis is labelled for the specific dates, and there is no interpolation between dates.
The issue I am having is that the values plotted by matplotlib do not match the entry values in the dataset. I am unsure what is happening here; would anyone be able to provide some insight, and possibly how I would fix it?
I have attached an image of the dataset and the plot, with the code contained below.
The code for generating the x-ticks, is as follows:
In: #creating a flat dates object such that dates are integer objects
flat_Dates_dates = flat_Dates[2:7]
flat_Dates_dates
Out: [20220620, 20220624, 20220627, 20220701, 20220708]
In: #creating datetime object(pandas, not datetime module) to only plot specific dates and remove interpolation of dates
date_obj_pd = pd.to_datetime(flat_Dates_dates, format=("%Y%m%d"))
Out: DatetimeIndex(['2022-06-20', '2022-06-24', '2022-06-27', '2022-07-01',
'2022-07-08'],
dtype='datetime64[ns]', freq=None)
As you can see from the dataset, the plotted trends should not take that form, the data values are wildly different from where they should be on the graph.
Edit: Apologies, I forgot to mention x = date_obj_pd - which is why I added the code, essentially just the array of datetime objects.
y is just the name of the pandas DataFrame (data table) I have included in the image.
You are plotting columns instead of rows: the blue line contains elements 1:7 of the first column.
If you transpose the dataframe you should get the desired result:
plt.plot(x, y[1:7].transpose(), 'o--')
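To illustrate why the transpose matters (a self-contained sketch with made-up data, not the original frame): plt.plot treats each column of a 2-D input as a separate series, so transposing turns the selected rows into the columns that get plotted against the five dates.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

x = pd.to_datetime([20220620, 20220624, 20220627, 20220701, 20220708], format='%Y%m%d')
y = pd.DataFrame(np.random.rand(10, 5))   # hypothetical data: 10 rows, one column per date

plt.plot(x, y[1:7].transpose(), 'o--')    # rows 1..6 become columns, giving one line per row
plt.show()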
I'm working through the pangeo tutorial gallery and am stuck on the ENSO exercise at the end of the xarray tutorial.
You'll need to download some files:
%%bash
git clone https://github.com/pangeo-data/tutorial-data.git
Then:
import numpy as np
import xarray as xr
import pandas as pd
# load all files
ds_all = xr.open_mfdataset('./tutorial-data/sst/*nc', combine='by_coords')
# slice for enso3.4 region
sst_enso = ds_all.sel(lat=slice(-5,5), lon=slice(-170+360,-120+360))
# subset years to match hint at the bottom
sst_enso = sst_enso.sel(time=sst_enso.time.dt.year>=1982)
# groupby each timepoint and find mean for entire spatial region
avg_enso = sst_enso.sst.groupby('time').mean(dim=['lat','lon'])
avg_enso.plot()
This figure matches the one shown at the bottom of the tutorial. So far so good, but I'd like to compute and plot the ONI as well. Warm or cold phases of the Oceanic Niño Index are defined by five consecutive 3-month running-mean values of sea surface temperature (SST) anomalies in the Niño 3.4 region that are above (below) the threshold of +0.5°C (-0.5°C). This is known as the Oceanic Niño Index (ONI).
I run into trouble because the month becomes an index.
Q1. I'm not sure how to make sure that subtracting sst_enso - enso_clim results in the correct math.
Assuming that is correct, I can compute the regional mean anomaly again and then use a rolling window mean.
enso_clim = sst_enso.sst.groupby('time.month').mean('time')
sst_anom = sst_enso - enso_clim
enso_anom = sst_anom.groupby('time').mean(dim=['lat','lon'])
oni = enso_anom.rolling(time = 3).mean()
Now I'd like to plot a bar chart of oni with positive values red and negative values blue. Something like this, for example with:
oni.sst.plot.bar(color=(oni.sst < 0).map({True: 'b', False: 'r'}))
Instead oni.sst.plot() gives me:
Resetting the index with enso_anom.reset_index('month', drop=True).sst still keeps month as a dimension and gives the same plot. If you drop_dims('month'), then the sst data goes away.
I also tried converting to a pandas DataFrame with oni.to_dataframe(), but you end up with 5040 rows, which is 12 months x the 420 month-years I subsetted for. According to the docs, "The DataFrame is indexed by the Cartesian product of index coordinates (in the form of a pandas.MultiIndex)", so I guess that makes sense, but it is not useful. Even if you reset_index of oni before converting to a dataframe you get the same 5040 rows. Q2. Since the dataframe must be repeating itself I can probably figure out where, but is there a way to do this "cleaner", with each date not repeated for all 12 months?
Your code results in a DataArray with the dimensions time and month, because subtracting the monthly climatology directly broadcasts its month dimension against the data. This is why you end up with such a plot.
There is a trick (found here) to calculate anomalies: group the data by 'time.month' before subtracting the climatology. Besides this, I would select 1986-2015 as the reference period (see the NOAA definition of the ONI index).
Combining both, I ended up with this short piece of code (without the bar plots):
import xarray as xr
import pandas as pd
import matplotlib.pyplot as plt
# load all files
ds_all = xr.open_mfdataset('./tutorial-data/sst/*nc', combine='by_coords')
# slice for enso3.4 region
sst_enso = ds_all.sel(lat=slice(-5,5), lon=slice(-170+360,-120+360))
avg_enso = sst_enso.sst.groupby('time').mean(dim=['lat','lon'])
avg_enso.plot()
ds = sst_enso.sst.mean(dim=['lat','lon'])
enso_clim = ds.sel(time=slice('1986-01-01', '2016-01-01')).groupby("time.month").mean("time")
# ref: https://origin.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ONI_change.shtml
enso_anom = ds.groupby("time.month") - enso_clim
# ref: http://xarray.pydata.org/en/stable/examples/weather-data.html#Calculate-monthly-anomalies
enso_anom.plot()
oni = enso_anom.rolling(time = 3).mean()
oni.plot()
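For the red/blue bar chart the question asked about (not part of the original answer, just a sketch built on the oni DataArray above), one option is to drop down to matplotlib and color the bars by sign:
oni_df = oni.to_dataframe(name='oni').dropna()                 # one row per time step
colors = ['r' if v > 0 else 'b' for v in oni_df['oni']]
plt.bar(oni_df.index, oni_df['oni'], width=20, color=colors)   # width in days for a datetime axis
plt.axhline(0.5, color='k', linewidth=0.5, linestyle='--')     # ONI warm/cold thresholds
plt.axhline(-0.5, color='k', linewidth=0.5, linestyle='--')
plt.show()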
I have a situation with a bunch of data files. These data files have a number of samples in a given time frame that depends on the system: at time t=1, for instance, I might have a file with 10 items, or 20 items, and at later times in that file I will always have the same number of items. The format is time, x, y, z in columns, loaded into a numpy array. The time values show which frame, but, as mentioned, there is always the same number per frame; let's go with 10 as a sample. So I'll have a (10, 4) numpy array where the time values are identical, but there are many frames in the file, say 100 frames, so really I have (1000, 4).

I want to plot the data with time on the x-axis and manipulations of the other data on the y-axis, but I am unsure how to do this with line plot methods in matplotlib. Normally, to provide both x and y values, I believe I need to do a scatter plot, so I'm hoping there's a better way. What I ideally want is to treat each line that has the same time code as a different series (so it will colour differently), and the next bit of data for that same line number in the next frame (time value) will be labelled with the same colour, giving those good contiguous lines. We can look at the time column and figure out how many items share a time code; let's call it "n". Sample code:
a = numpy.loadtxt('sampledata.txt')
plt.plot(a[:0,:,n],a[:1,:1])
plt.show()
I think this code expresses what I'm going for, though it doesn't work.
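(Not from the thread, just a sketch of one plain-matplotlib approach, assuming the rows really are stored frame after frame with n consecutive rows per time code: reshape the column of interest to (frames, n) and plot it against the unique time values, so each of the n items gets its own coloured line.)
import numpy as np
import matplotlib.pyplot as plt

a = np.loadtxt('sampledata.txt')      # columns: time, x, y, z
t = np.unique(a[:, 0])                # one entry per frame
n = len(a) // len(t)                  # items sharing each time code
x = a[:, 1].reshape(len(t), n)        # shape (frames, n)

plt.plot(t, x)                        # one coloured, contiguous line per item
plt.show()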
I hope this is what you wanted.
seaborn's scatterplot can categorize the data into groups that share the same code (the time code in this case) and give each group the same color.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv(r"E:\Programming\Python\Matplotlib\timecodes.csv",
names=["time","x","y","z","code"]) #use your file
df["time"]=pd.to_datetime(df["time"]) #recognize the data as Time
df["x"]=df["time"].dt.day # I changed the data into "Date only" and imported to x column. Easier to see on graph.
#just used random numbers in y and z in my data.
sns.scatterplot(x="x", y="y", data=df, hue="code") #hue does the grouping
plt.show()
I used a csv file here, but you can do the same with your text file by adding sep="\t" to the read_csv arguments. I also added a code column to the file. If you have it, the code can group the data in the graph, so you don't have to separate it or make a hierarchical index. If you want to change the colors or grouping, please see the seaborn website.
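For a plain text file the read call might look like this (just a sketch; the filename is a placeholder):
df = pd.read_csv("timecodes.txt", sep="\t", names=["time", "x", "y", "z", "code"])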
Hope this helps.
As an alternative, here is the method I used, though Tim's answer is accurate as well. Since the time codes are not date/time information, I modified my own code to add tags as a second column I call "p" (they're polymers).
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

datain = np.loadtxt('somefile.txt')
df = pd.DataFrame(data=datain, columns=["t", "p", "x", "y", "z"])
ax = sns.scatterplot(x="t", y="x", data=df, hue="p")
plt.show()
And of course the other columns can be plotted similarly if desired.
I have a pandas dataframe with a series of speed values from a continuously moving sensor. Since it is sensor data, we often get errors in the middle, and at some points the moving average does not seem to help either. What methods can I use to remove these outliers or peak points from the data?
Example:
data points = {0.5,0.5,0.7,0.6,0.5,0.7,0.5,0.4,0.6,4,0.5,0.5,4,5,6,0.4,0.7,0.8,0.9}
In this data, the points 4, 4, 5, 6 are clearly outlier values.
Previously I used a rolling mean with a 5-minute window to smooth these values, but I still get a lot of these blip points, which I want to remove. Can anyone suggest a technique to get rid of them?
I have an image which gives a clearer view of the data:
You can see here how the data shows some outlier points, which I have to remove.
Any idea what a possible way to get rid of these points would be?
I really think z-score using scipy.stats.zscore() is the way to go here. Have a look at the related issue in this post. There they are focusing on which method to use before removing potential outliers. As I see it, your challenge is a bit simpler, since, judging by the data provided, it would be pretty straightforward to identify potential outliers without having to transform the data. Below is a code snippet that does just that. Just remember, though, that what does and does not look like an outlier will depend entirely on your dataset. And after removing some outliers, what did not look like an outlier before may suddenly do so. Have a look:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
# your data (as a list)
data = [0.5,0.5,0.7,0.6,0.5,0.7,0.5,0.4,0.6,4,0.5,0.5,4,5,6,0.4,0.7,0.8,0.9]
# initial plot
df1 = pd.DataFrame(data = data)
df1.columns = ['data']
df1.plot(style = 'o')
# Function to identify and remove outliers
def outliers(df, level):
    # 1. temporary dataframe
    df = df.copy(deep=True)
    # 2. Select a level for a Z-score to identify and remove outliers
    df_Z = df[(np.abs(stats.zscore(df)) < level).all(axis=1)]
    ix_keep = df_Z.index
    # 3. Subset the raw dataframe with the indexes you'd like to keep
    df_keep = df.loc[ix_keep]
    return df_keep
Original data:
Test run 1 : Z-score = 4:
As you can see, no data has been removed because the level was set too high.
Test run 2 : Z-score = 2:
Now we're getting somewhere. Two outliers have been removed, but there is still some dubious data left.
Test run 3 : Z-score = 1.2:
This is looking really good. The remaining data now seems to be a bit more evenly distributed than before. But now the data point highlighted in the plot is starting to look a bit like a potential outlier. So where to stop? That's going to be entirely up to you!
EDIT: Here's the whole thing for an easy copy&paste:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from scipy import stats
# your data (as a list)
data = [0.5,0.5,0.7,0.6,0.5,0.7,0.5,0.4,0.6,4,0.5,0.5,4,5,6,0.4,0.7,0.8,0.9]
# initial plot
df1 = pd.DataFrame(data = data)
df1.columns = ['data']
df1.plot(style = 'o')
# Function to identify and remove outliers
def outliers(df, level):
    # 1. temporary dataframe
    df = df.copy(deep=True)
    # 2. Select a level for a Z-score to identify and remove outliers
    df_Z = df[(np.abs(stats.zscore(df)) < level).all(axis=1)]
    ix_keep = df_Z.index
    # 3. Subset the raw dataframe with the indexes you'd like to keep
    df_keep = df.loc[ix_keep]
    return df_keep
# remove outliers
level = 1.2
print("df_clean = outliers(df = df1, level = " + str(level)+')')
df_clean = outliers(df = df1, level = level)
# final plot
df_clean.plot(style = 'o')
You might cut values above a certain quantile as follows:
import numpy as np
data_points = np.array(data_points)
clean_data = data_points[data_points <= np.percentile(data_points, 95)]
In pandas you would use df.quantile; you can find it here.
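A minimal sketch of that pandas variant, assuming the single-column df1 frame from the answer above:
df_clean = df1[df1['data'] <= df1['data'].quantile(0.95)]   # keep values at or below the 95th percentile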
Or you may use the Q3 + 1.5*IQR approach to eliminate the outliers, as you would do with a boxplot.
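A sketch of that IQR (boxplot) rule, again assuming a single-column frame like df1:
q1, q3 = df1['data'].quantile([0.25, 0.75])    # first and third quartiles
iqr = q3 - q1
df_clean = df1[df1['data'] <= q3 + 1.5 * iqr]  # keep values below the upper fence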