I am trying to calculate the values of the Cat column "recursively".
Every loop should calculate the maximum value of the Cat column (Catz) within a group of x. If the date range becomes <= 60, the Cat column value should be updated with Catz += 1. I have an arcpy version of this process working; however, I have thousands of other data sets that would have to be converted into an arcpy-friendly format, and I am not well acquainted with pandas.
I made reference to [1]: Calculate DataFrame values recursively and [2]: python pandas - apply function with two arguments to columns. I still haven't quite understood the Series/DataFrame concept and how to apply either outcome.
import pandas as pd
import numpy as np
import time
from datetime import datetime, date, timedelta
data = {'x': ["ASPELBJNMI", "JUNRNEXCRG", "ASPELBJNMI", "JUNRNEXCRG"],
        'start': ["6/27/2018", "8/4/2018", "8/22/2018", "8/12/2018"],
        'finish': ["8/11/2018", "10/3/2018", "8/31/2018", "10/26/2018"],
        'DateRange': [0, 0, 0, 0],
        'Cat': [-1, -1, -1, -1],
        'ID': [1, 2, 3, 4]}
df = pd.DataFrame(data)
df = df.set_index('ID')  # set_index returns a new frame, so assign it back
def classd(houp):
    Catz = houp.Cat.min()
    Catz += 1
    houp = houp.groupby('x')
    for x, houp2 in houp:
        houp.DateRange = (pd.to_datetime(houp.finish.loc[:]).min() - houp.start.loc[:]).astype('timedelta64[D]')
        houp.Cat = np.where(houp.DateRange <= 60, Catz, -1)
    return houp
df['Cat'] = df[['x','DateRange','Cat']].apply(classd, axis=1).Cat
print df
I get the following traceback when I run my code:
Catz = houp.Cat.min()
AttributeError: ("'long' object has no attribute 'min'", u'occurred at index 0')
Desired outcome
OBJECTID_1 *  Conc *      ID     start      finish      DateRange  Cat
1             ASPELBJNMI  LAPMT  6/27/2018  8/11/2018          45    0
2             ASPELBJNMI  KLKIY  8/22/2018  8/31/2018           9    1
15            JUNRNEXCRG  CGCHK  8/4/2018   10/3/2018          60    1
16            JUNRNEXCRG  IQYGJ  8/12/2018  10/26/2018         83   -1
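For reference, the traceback occurs because apply with axis=1 calls classd once per row and passes each row in as a Series, so houp.Cat is a single integer with no .min method. Below is a minimal group-wise sketch, continuing from the df built above; the DateRange rule is a simplified single-pass placeholder, not the exact recursive logic:
import pandas as pd
import numpy as np
# parse the date strings once
df['start'] = pd.to_datetime(df['start'])
df['finish'] = pd.to_datetime(df['finish'])
def per_group(group):
    # placeholder rule: days from each row's start to its finish
    group = group.copy()
    group['DateRange'] = (group['finish'] - group['start']).dt.days
    catz = group['Cat'].max() + 1  # next category number for this group
    group['Cat'] = np.where(group['DateRange'] <= 60, catz, -1)
    return group
df = df.groupby('x', group_keys=False).apply(per_group)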
Your program is a little bit complicated to comprehend, but I would suggest trying something simple with the apply function:
s.apply(lambda x: x ** 2)
Here s is a Series.
https://pandas.pydata.org/docs/reference/api/pandas.Series.apply.html
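For instance, a small self-contained example (the values are made up for illustration):
import pandas as pd
s = pd.Series([1, 2, 3, 4])
print(s.apply(lambda x: x ** 2))  # squares each element: 1, 4, 9, 16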
Essentially I have a csv file which has an OFFENCE_CODE column and a column with some dates called OFFENCE_MONTH. The code I have provided retrieves the 10 most frequently occurring offence codes within the OFFENCE_CODE column, but I need to be able to do this between two dates from the OFFENCE_MONTH column.
import numpy as np
import pandas as pd
input_date1 = 2012/11/1
input_date2 = 2013/11/1
df = pd.read_csv("penalty_data_set.csv", dtype='unicode', usecols=['OFFENCE_CODE', 'OFFENCE_MONTH'])
print(df['OFFENCE_CODE'].value_counts().nlargest(10))
You can use pandas.Series.between:
df['OFFENCE_MONTH'] = pd.to_datetime(df['OFFENCE_MONTH'])
input_date1 = pd.to_datetime('2012/11/1')
input_date2 = pd.to_datetime('2013/11/1')
m = df['OFFENCE_MONTH'].between(input_date1, input_date2)
df.loc[m, 'OFFENCE_CODE'].value_counts().nlargest(10)
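Note that between includes both endpoints by default; recent pandas versions also accept an inclusive argument (e.g. inclusive='left') if you need a half-open range.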
You can do this if it is per month:
import pandas as pd
# example dataframe
# df = pd.read_csv("penalty_data_set.csv", dtype='unicode', usecols=['OFFENCE_CODE', 'OFFENCE_MONTH'])
d = {'OFFENCE_MONTH':[1,1,1,2,3,4,4,5,6,12],
'OFFENCE_CODE':['a','a','b','d','r','e','f','g','h','a']}
df = pd.DataFrame(d)
print(df)
# make a filter (example here)
df_filter = df.loc[(df['OFFENCE_MONTH']>=1) & (df['OFFENCE_MONTH']<5)]
print(df_filter)
# count the most frequent codes in the filtered data
print(df_filter['OFFENCE_CODE'].value_counts().nlargest(10))
example result:
a 2
b 1
d 1
r 1
e 1
f 1
First you need to convert the dates in the OFFENCE_MONTH column, and the two input dates, to datetime:
from datetime import datetime
input_date1 = datetime.strptime("2012-11-01", "%Y-%m-%d")
input_date2 = datetime.strptime("2013-11-01", "%Y-%m-%d")
df['OFFENCE_MONTH'] = pd.to_datetime(df['OFFENCE_MONTH'])  # strptime handles single strings, not a whole Series
Then select rows based on your conditions (use & with parentheses rather than and for element-wise boolean indexing):
rslt_df = df[(df['OFFENCE_MONTH'] >= input_date1) & (df['OFFENCE_MONTH'] <= input_date2)]
print(rslt_df['OFFENCE_CODE'].value_counts().nlargest(10))
I am trying to recover the original date or index after grouping a time series with a datetime index by year. Is there a faster way, without a loop and an extra column, to obtain first_day_indices?
import pandas as pd
import numpy as np
import datetime as dt
# Data
T = 1000
base = dt.date.today()
date_list = [base - dt.timedelta(weeks=x) for x in range(T)]
date_list.reverse()
test_data = pd.DataFrame(np.random.randn(T)/100, columns=['Col1'])
test_data.index = pd.to_datetime(date_list)
test_data['date'] = test_data.index
first_days = test_data['date'].groupby(test_data.index.year).first()
first_day_indices= []
for i in first_days:
first_day_indices.append(np.where(test_data.index == i)[0][0])
print(first_day_indices)
You can use pandas.Series.isin to check whether elements in Series are contained in a list of values.
test_data.reset_index()[test_data.index.isin(first_days)].index.tolist()
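If you want the integer positions directly, a loop-free alternative (assuming the dates in first_days are unique in the index, as they are here) is Index.get_indexer:
first_day_indices = test_data.index.get_indexer(first_days).tolist()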
I can't see the result...
My result is 0 and it should be 824
import pandas as pd
apple = r'C:\Users\User\Downloads\AAPL.xlsx'
data = pd.read_excel(apple)
dateindextime = data.set_index("timestamp")
rango = dateindextime.loc["2011-08-20":"2008-05-15"]
print(len(rango))
If I do
print(rango)
output:
Empty DataFrame
Columns: [open, high, low, close, adjusted_close, volume]
Index: []
Kinda hard to tell without the AAPL.xlsx dataset, but I'm guessing you will need to convert the "timestamp" column to a datetime object first using pd.to_datetime. From there you would slice on the datetime object rather than on a string, which is what you were doing above. Also note that the slice must run from the earlier date to the later one; on an ascending index, .loc["2011-08-20":"2008-05-15"] returns an empty frame. If you posted the AAPL.xlsx dataset, I could dig deeper.
import pandas as pd
import datetime
apple = r'C:\Users\User\Downloads\AAPL.xlsx'
data = pd.read_excel(apple)
data["datetime_timestamp"] = pd.to_datetime(data["timestamp"], infer_datetime_format=True)
dateindextime = data.set_index("datetime_timestamp")
ti = datetime.date(2008,5,15)
tf = datetime.date(2011,8,20)
rango = dateindextime.loc[ti:tf]
print(len(rango))
I have this code where I wish to change the date format. But I only manage to change one line and not the whole dataset.
Code:
import pandas as pd
df = pd.read_csv("data_q_3.csv")
result = df.groupby("Country/Region").max().sort_values(by='Confirmed', ascending=False)[:10]
pd.set_option('display.max_column', None)
print("Covid 19 top 10 countries based on confirmed case:")
print(result)
from datetime import datetime
datetime.fromisoformat("2020-03-18T12:13:09").strftime("%Y-%m-%d-%H:%M")
Does anyone know how to fit the code so that the datetime changes in the whole dataset?
Thanks!
After looking at your problem for a while, I figured out how to change the values in the 'DateTime' column. The only problem that may arise is if the 'Country/Region' column has duplicate location names.
Editing the time is simple, as all you have to do is make use of Python's slicing. You can slice a string by typing
string = 'abcdefghijklmnopqrstuvwxyz'
print(string[0:5])
which will result in abcde (the end index is exclusive).
Below is the finished code.
import pandas as pd
# read the data
df = pd.read_csv("data_q_3.csv")
# top 10 countries by confirmed cases
result = df.groupby("Country/Region").max().sort_values(by='Confirmed', ascending=False)[:10]
pd.set_option('display.max_column', None)
# you need a for loop to go through the whole column
for row in result.index:
    # get the current stored time
    time = result.at[row, 'DateTime']
    # reformat the time string by slicing out the date (index 0 to 10)
    # and the hours and minutes (index 11 to 16, assuming ISO strings
    # like '2020-03-18T12:13:09') with a dash in the middle
    time = time[0:10] + "-" + time[11:16]
    # store the new time in the result
    result.at[row, 'DateTime'] = time
# print the result
print("Covid 19 top 10 countries based on confirmed case:")
print(result)
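If you would rather skip the loop, a vectorized sketch (assuming the 'DateTime' column holds ISO strings like '2020-03-18T12:13:09', as in the question):
# parse the whole column, then format it in one pass
result['DateTime'] = pd.to_datetime(result['DateTime']).dt.strftime("%Y-%m-%d-%H:%M")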
I have a large temperature time series that I'm performing some functions on. I'm taking hourly observations and creating daily statistics. After I'm done with my calculations, I want to take the grouped Year and Julian day keys from the groupby object ('aa' below), together with the drangeT and drangeHI results, and make an entirely new DataFrame from those variables. Code is below:
import numpy as np
import scipy.stats as st
import pandas as pd
city = ['BUF']#,'PIT','CIN','CHI','STL','MSP','DET']
mons = np.arange(5,11,1)
for a in city:
    data = 'H:/Classwork/GEOG612/Project/' + a + 'Data_cut.txt'
    df = pd.read_table(data, sep='\t')
    df['TempF'] = ((9. / 5.) * df['TempC']) + 32.
    df1 = df.loc[df['Month'].isin(mons)]
    aa = df1.groupby(['Year', 'Julian'], as_index=False)
    maxT = aa.aggregate({'TempF': np.max})
    minT = aa.aggregate({'TempF': np.min})
    maxHI = aa.aggregate({'HeatIndex': np.max})
    minHI = aa.aggregate({'HeatIndex': np.min})
    drangeT = maxT - minT
    drangeHI = maxHI - minHI
    df2 = pd.DataFrame(data={'Year': aa.Year, 'Day': aa.Julian, 'TRange': drangeT, 'HIRange': drangeHI})
All variables in the df2 command are of length 8250, but I get this error message when I run it:
ValueError: cannot copy sequence with size 3 to array axis with dimension 8250
Any suggestions are welcomed and appreciated. Thanks!
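One possible fix, sketched under the assumption of the column names above: aa.Year and aa.Julian are SeriesGroupBy objects rather than columns, which is what breaks the DataFrame constructor. Aggregating the max and min in a single pass and building the ranges from the aggregated frame avoids that (named aggregation needs pandas >= 0.25):
stats = df1.groupby(['Year', 'Julian'], as_index=False).agg(
    maxT=('TempF', 'max'), minT=('TempF', 'min'),
    maxHI=('HeatIndex', 'max'), minHI=('HeatIndex', 'min'))
df2 = pd.DataFrame({'Year': stats['Year'],
                    'Day': stats['Julian'],
                    'TRange': stats['maxT'] - stats['minT'],
                    'HIRange': stats['maxHI'] - stats['minHI']})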