Appending DataFrame to List in Pandas, Python

I have a file of data and want to select a specific State. From there I need to return the counts as a list, but there will be years in the range with missing data, so I need to replace the missing values.
I am having some issues with my code; likely something is slightly off in my for loop:
def stateCountAsList(filepath, state):
    import pandas as pd
    pd.set_option('display.width', 200)
    import numpy as np
    dataFrame = pd.read_csv(filepath, header=0, sep='\t')
    df = dataFrame.iloc[0:638, :]
    dfState = df[df['State'] == state]
    yearList = range(1999, 2012)
    countsList = []
    for dfState['Year'] in yearList:
        countsList = dfState['Count']
    else:
        countsList.append(np.nan)
    return countsList
    print countsList.tolist()

stateCountAsList(filepath, state)
state = 'California'
Traceback:
C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py:59: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
for dfState['Year'] in yearList:
Traceback (most recent call last):
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 67, in <module>
stateCountAsList(filepath, state)
File "C:\Users\Michael\workspace\UCIIntrotoPythonDA\src\Michael_Madani_week3.py", line 62, in stateCountAsList
countsList.append(np.nan)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\core\series.py", line 1466, in append
verify_integrity=verify_integrity)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\tools\merge.py", line 754, in concat
copy=copy)
File "C:\Users\Michael\Anaconda\lib\site-packages\pandas\tools\merge.py", line 805, in __init__
raise TypeError("cannot concatenate a non-NDFrame object")
TypeError: cannot concatenate a non-NDFrame object

You have at least two different issues in your code:
The warning
A value is trying to be set on a copy of a slice from a DataFrame.
is triggered by for dfState['Year'] in yearList (line 59 in your code). In this line you try to loop over a range of years (1999 to 2012), but instead you implicitly assign each year value to dfState['Year']. dfState is not an independent copy but a "view" (http://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy), since df = dataFrame.iloc[0:638,:] returns a view.
But as mentioned earlier, you don't want to assign a value to the DataFrame here, only loop over the years. So the for loop should look like:
for year in range(1999, 2012):
    ...
The second issue is in line 62. Here you try to append np.nan to your "list" countsList, but countsList is not a list anymore!
Two lines before, you assigned a pd.Series to it (countsList = dfState['Count']), effectively changing the type. Series.append expects another Series or DataFrame, which gives you the TypeError: cannot concatenate a non-NDFrame object.
With this information you should be able to correct your loop.
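For reference, a corrected version of the loop might look like this (a sketch, assuming each state has at most one row per year):
import numpy as np

countsList = []
for year in range(1999, 2012):
    # Rows of this state's data matching the current year (may be empty).
    counts = dfState.loc[dfState['Year'] == year, 'Count']
    if len(counts) > 0:
        countsList.append(counts.iloc[0])
    else:
        # No data for this year: substitute NaN, as the question requires.
        countsList.append(np.nan)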
As an alternative, you can get the desired result using the pandas query method (http://pandas.pydata.org/pandas-docs/stable/indexing.html#the-query-method-experimental):
def stateCountAsList(filepath, state):
    import pandas as pd
    dataFrame = pd.read_csv(filepath, header=0, sep='\t')
    df = dataFrame.iloc[0:638, :]
    stateList = df.query("(State == @state) & (Year >= 1999) & (Year < 2012)").Count.tolist()
    return stateList
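Note that query alone returns only the counts that actually exist; it does not insert NaN for the missing years the question asks about. One possible sketch of that gap-filling step, assuming each state has at most one row per year, is to reindex over the full year range:
def stateCountAsList(filepath, state):
    import pandas as pd
    dataFrame = pd.read_csv(filepath, header=0, sep='\t')
    df = dataFrame.iloc[0:638, :]
    dfState = df[df['State'] == state]
    # Index the counts by year, then reindex over 1999-2011;
    # years without a row automatically become NaN.
    counts = dfState.set_index('Year')['Count'].reindex(range(1999, 2012))
    return counts.tolist()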


How can I make a for loop to populate a DataFrame?

First of all, thanks to everyone who takes the time to help.
I have started to learn Python and came across an opportunity to use it to my advantage at work.
I basically made a script that reads a Google Sheets file, imports it into pandas, and cleans up the data.
In the end, I just want to have the names of the agents in the columns and all of their values from the resolucao column below them, so I can take the average amount of time for all of the agents, but I'm struggling to make it work with a list comprehension / for loop.
This is what the DataFrame looks like after I cleaned it up
And this is the Code that I tried to Run
PS: Sorry for the messy code.
agentes_unique = list(df['Agente'].unique())
agentes_duplicated = df['Agente']
value_resolucao_duplicated = df['resolucao']
n_of_rows = []
for row in range(len(df)):
    n_of_rows.append(row)
i = 0
while n_of_rows[i] < len(n_of_rows):
    df2 = pd.DataFrame({agentes_unique[i]: (value for value in df['resolucao'][i] if df['Agente'][i] == agentes_unique[i])})
    i += 1
df2.to_excel('teste.xlsx', index=True, header=True)
But in the end it came to this error:
Traceback (most recent call last):
File "C:\Users\FELIPE\Desktop\Python\webscraping\.venv\lib\site-packages\pandas\core\indexes\range.py", line 385, in get_loc
return self._range.index(new_key)
ValueError: 0 is not in range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\FELIPE\Desktop\Python\webscraping\bot_csv_extract\bot_login\main.py", line 50, in <module>
df2 = pd.DataFrame({'Agente': (valor for valor in df['resolucao'][i] if df['Agente'][i] == 'Gabriel')})
File "C:\Users\FELIPE\Desktop\Python\webscraping\.venv\lib\site-packages\pandas\core\series.py", line 958, in __getitem__
return self._get_value(key)
File "C:\Users\FELIPE\Desktop\Python\webscraping\.venv\lib\site-packages\pandas\core\series.py", line 1069, in _get_value
loc = self.index.get_loc(label)
File "C:\Users\FELIPE\Desktop\Python\webscraping\.venv\lib\site-packages\pandas\core\indexes\range.py", line 387, in get_loc
raise KeyError(key) from err
KeyError: 0
I feel like I'm making some obvious mistake, but I can't fix it.
Again, thanks to anyone who tries to help.
Are you looking to do something like this? This is just sample data, but it should be a good start for what you are trying to do, if I understand your goal correctly.
data = {
    'Column1': ['Data', 'Another_Data', 'More_Data', 'Last_Data'],
    'Agente': ['Name', 'Another_Name', 'More_Names', 'Last_Name'],
    'Data': [1, 2, 3, 4]
}
df = pd.DataFrame(data)
df = df.pivot(index='Column1', columns=['Agente'], values='Data')
df.reset_index()
It is not recommended to use for loops against pandas DataFrames: It is considered messy and inefficient.
With some practice you will be able to approach problems in such a way that you will not need to use for loops in these cases.
From what I understand, your goal can be realized in 3 simple steps:
1. Select the 2 columns of interest. I recommend you take a look at how to access different elements of a DataFrame:
df = df[["Agente", "resolucao"]]
2. Convert the column you want to average to a numeric value. Say seconds:
df["resolucao"] = pd.to_timedelta(df['resolucao'].astype(str)).dt.total_seconds()
3. Apply an average aggregation, via the groupby() function:
df = df.groupby(["Agente"]).mean().reset_index()
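Putting the three steps together, a minimal self-contained sketch (with made-up sample data, since the original data was posted as an image) could look like:
import pandas as pd

# Hypothetical stand-in for the cleaned-up sheet.
df = pd.DataFrame({
    'Agente': ['Gabriel', 'Gabriel', 'Maria', 'Maria'],
    'resolucao': ['0:10:00', '0:20:00', '0:05:00', '0:15:00'],
})

df = df[['Agente', 'resolucao']]
# Durations become seconds so they can be averaged.
df['resolucao'] = pd.to_timedelta(df['resolucao'].astype(str)).dt.total_seconds()
# One row per agent with the mean resolution time in seconds.
df = df.groupby(['Agente']).mean().reset_index()
print(df)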
Hope this helps.
For next time, I also recommend not posting the data as an image, so that others can reproduce your code.
Cheers and keep it up!

Python Warning: "SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame" [duplicate]

This question already has answers here:
How to deal with SettingWithCopyWarning in Pandas
(20 answers)
Closed 2 years ago.
I've created this new dataframe to store these 2 columns:
#Create a new data frame
new_df = df[period-1:]
new_df['Buy'] = get_signal(new_df)[0]
new_df['Sell'] = get_signal(new_df)[1]
But it gives me this error:
[Command: python -u C:\Users\Nicolò\Documents\Git\ProgettoTradingBot\ProgettoTradeBot\GUIprova2.py]
C:\Users\Nicolò\Documents\Git\ProgettoTradingBot\ProgettoTradeBot\GUIprova2.py:80: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
new_df['Buy'] = get_signal(new_df)[0]
C:\Users\Nicolò\Documents\Git\ProgettoTradingBot\ProgettoTradeBot\GUIprova2.py:81: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
new_df['Sell'] = get_signal(new_df)[1]
Traceback (most recent call last):
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\cbook\__init__.py", line 196, in process
func(*args, **kwargs)
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\animation.py", line 951, in _start
self._init_draw()
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\animation.py", line 1743, in _init_draw
self._draw_frame(next(self.new_frame_seq()))
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\animation.py", line 1766, in _draw_frame
self._drawn_artists = self._func(framedata, *self._args)
File "C:\Users\Nicolò\Documents\Git\ProgettoTradingBot\ProgettoTradeBot\GUIprova2.py", line 98, in animate
a.fill_between(x_axis,new_df['Upper'],new_df['Lower'], color = 'grey', alpha= 0.5)
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\__init__.py", line 1565, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "C:\Users\Nicolò\AppData\Local\Programs\Python\Python38\lib\site-packages\matplotlib\axes\_axes.py", line 5158, in fill_between
where = where & ~functools.reduce(np.logical_or,
ValueError: operands could not be broadcast together with shapes (2208,) (2189,)
[Finished in 79.149s]
Here's the complete file:
https://github.com/Raccispini/ProgettoTradeBot/blob/master/GUIprova2.py
How can I copy the get_signal(new_df) arrays into new_df['Buy'] and new_df['Sell'] without getting this warning?
Thanks :)
Edit:
I've done this piece of the program in another project before and it worked. I don't understand why it doesn't work here.
I copy-pasted it exactly as it was before.
You have created new_df calling:
new_df = df[period-1:]
so new_df is actually a view of the original DataFrame.
Then you attempt to modify it, invoking new_df['Buy'] = ... and new_df['Sell'] = ..., so this warning occurs.
Maybe you should create new_df as an "independent" DataFrame, with its own data buffer:
new_df = df[period-1:].copy()
Then you are free to modify its content.
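As a quick illustration of the difference (a toy sketch, not the OP's data):
import pandas as pd

df = pd.DataFrame({'price': range(10)})
period = 3

# This version may trigger SettingWithCopyWarning, because
# df[period - 1:] can be a view into df:
#   new_df = df[period - 1:]
#   new_df['Buy'] = True

# Safe version: .copy() gives new_df its own data buffer.
new_df = df[period - 1:].copy()
new_df['Buy'] = True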

Deleting rows from Python Dataframe with condition

I'm trying to delete some rows from a huge dataset in Pandas. I decided to use iterrows() to search for the indexes to delete (since I know that deleting while iterating is a bad idea).
Right now it looks like this:
list_to_delete = []
rows_to_delete = {}
for index, row in train.iterrows():
    if <some conditions>:
        list_to_delete.append(int(index))
        rows_to_delete[int(index)] = row
train = train.drop([train.index[i] for i in list_to_delete])
It's giving me this error:
Traceback (most recent call last):
File "C:/Users/patka/PycharmProjects/PARSER/getStatistics.py", line 115, in <module>
train = train.drop([train.index[i] for i in list_to_delete])
File "C:/Users/patka/PycharmProjects/PARSER/getStatistics.py", line 115, in <listcomp>
train = train.drop([train.index[i] for i in list_to_delete])
File "C:\Users\patka\PycharmProjects\PARSER\venv\lib\site-packages\pandas\core\indexes\base.py", line 3958, in __getitem__
return getitem(key)
IndexError: index 25378 is out of bounds for axis 0 with size 25378
How is this possible?
Before that, I created a copy of this dataset and tried to delete the chosen rows from the copy while iterating through the original one (with inplace=True). Unfortunately, that raised an error saying that a NoneType object has no attribute 'drop'.
I would appreciate your help very much.
My example row looks like this:
resolution Done
priority Major
created 2000-07-04T13:13:52.000+0200
status Resolved
Team XBee
changelog {'Team" : {'from':...
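One likely cause worth spelling out: iterrows() yields index labels, and list_to_delete already stores those labels, so wrapping them again in train.index[i] treats them as integer positions and runs past the end of the axis once a label is at least as large as the frame's length. (The earlier NoneType error is also consistent with reassigning the result of drop(..., inplace=True), which returns None.) A sketch of the usual fixes, with a hypothetical condition:
# Option 1: the collected labels can be passed to drop directly.
train = train.drop(list_to_delete)

# Option 2 (usually preferable): skip iterrows() entirely and filter
# with a boolean mask; the column and value here are placeholders.
train = train[train['resolution'] != 'Done']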

Get Pandas DataFrame first column

This question is odd, since I know HOW to do something, but I don't know WHY I can't do it another way.
Suppose a simple data frame:
import pandas as pd
a = pd.DataFrame([[0,1], [2,3]])
I can slice this data frame very easily: the first column is a[[0]], the second is a[[1]]. Simple, isn't it?
Now, let's have a more complex data frame. This is part of my code:
var_vec = [i for i in range(100)]
num_of_sites = 100
row_names = ["_".join(["loc", str(i)]) for i in range(1, num_of_sites + 1)]
frame = pd.DataFrame(var_vec, columns=["Variable"], index=row_names)
spec_ab = [i**3 for i in range(100)]
frame[1] = spec_ab
The data frame frame is also a pandas DataFrame, just like a. I can get the second column very easily as frame[[1]]. But when I try frame[[0]] I get an error:
Traceback (most recent call last):
File "<ipython-input-55-0c56ffb47d0d>", line 1, in <module>
frame[[0]]
File "C:\Users\Robert\Desktop\Záloha\WinPython-64bit-3.5.2.2\python- 3.5.2.amd64\lib\site-packages\pandas\core\frame.py", line 1991, in __getitem__
return self._getitem_array(key)
File "C:\Users\Robert\Desktop\Záloha\WinPython-64bit-3.5.2.2\python- 3.5.2.amd64\lib\site-packages\pandas\core\frame.py", line 2035, in _getitem_array
indexer = self.ix._convert_to_indexer(key, axis=1)
File "C:\Users\Robert\Desktop\Záloha\WinPython-64bit-3.5.2.2\python- 3.5.2.amd64\lib\site-packages\pandas\core\indexing.py", line 1184, in _convert_to_indexer
indexer = labels._convert_list_indexer(objarr, kind=self.name)
File "C:\Users\Robert\Desktop\Záloha\WinPython-64bit-3.5.2.2\python- 3.5.2.amd64\lib\site-packages\pandas\indexes\base.py", line 1112, in _convert_list_indexer
return maybe_convert_indices(indexer, len(self))
File "C:\Users\Robert\Desktop\Záloha\WinPython-64bit-3.5.2.2\python- 3.5.2.amd64\lib\site-packages\pandas\core\indexing.py", line 1856, in maybe_convert_indices
raise IndexError("indices are out-of-bounds")
IndexError: indices are out-of-bounds
I can still use frame.iloc[:,0], but the problem is that I don't understand why I can't use simple slicing with [[]]. I use WinPython Spyder 3, if that helps.
using your code:
import pandas as pd
var_vec = [i for i in range(100)]
num_of_sites = 100
row_names = ["_".join(["loc", str(i)]) for i in
range(1,num_of_sites + 1)]
frame = pd.DataFrame(var_vec, columns = ["Variable"], index = row_names)
spec_ab = [i**3 for i in range(100)]
frame[1] = spec_ab
if you ask to print out frame you get:
       Variable    1
loc_1         0    0
loc_2         1    1
loc_3         2    8
loc_4         3   27
loc_5         4   64
loc_6         5  125
......
So the cause of your problem becomes obvious: you have no column called 0.
On line one you specify a list called var_vec.
On line 4 you make a DataFrame out of that list, but you specify the index values and the column name (which is usually good practice).
Numerical column names ('0', '1', ...) as in the first example appear only when you don't specify column names; the number is a label, not a column position indexer.
If you want to access columns by their position, you can:
df[df.columns[0]]
What happens there is that you get the list of columns of the df, pick the one at position 0, and pass it to the df as a label reference.
hope that helps you understand
edit:
another way (better) would be:
df.iloc[:,0]
where ":" stands for all rows (rows are likewise indexed by number, from 0 up to the number of rows).

OverflowError: size does not fit in an int

I am writing a Python script to use in AzureML. My dataset is quite big. I have a dataset with columns called ID (int) and DataType (text). I would like to concatenate these values so I have just one text column containing both the ID and the DataType, separated by a comma.
How can I avoid getting an error when I do this? Do I have any mistakes in my code?
When I run this code I get the following error:
Error 0085: The following error occurred during script evaluation, please view the output log for more information:
---------- Start of error message from Python interpreter ----------
data:text/plain,Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 167, in batch
idfs.append(rutils.RUtils.RFileToDataFrame(infile))
File "C:\server\RReader\rutils.py", line 15, in RFileToDataFrame
rreader = RReaderFactory.construct_from_file(filename, compressed)
File "C:\server\RReader\rreaderfactory.py", line 25, in construct_from_file
return _RReaderFactory.construct_from_stream(stream)
File "C:\server\RReader\rreaderfactory.py", line 46, in construct_from_stream
return RReader(BinaryReader(RFactoryConstants.big_endian, stream.read()))
File "C:\pyhome\lib\gzip.py", line 254, in read
self._read(readsize)
File "C:\pyhome\lib\gzip.py", line 313, in _read
self._add_read_data( uncompress )
File "C:\pyhome\lib\gzip.py", line 329, in _add_read_data
self.crc = zlib.crc32(data, self.crc) & 0xffffffffL
OverflowError: size does not fit in an int
My code is as below:
# The script MUST contain a function named azureml_main
# which is the entry point for this module.
#
# The entry point function can contain up to two input arguments:
#   Param<dataframe1>: a pandas.DataFrame
#   Param<dataframe2>: a pandas.DataFrame
def azureml_main(dataframe1):
    import pandas as pd
    dataframe1['SignalID,DataType'] = dataframe1['ID'] + " , " + dataframe1['DataType']
    dataframe1 = dataframe1.drop('DataType')
    dataframe1 = dataframe1.drop('ID')
    # Return value must be of a sequence of pandas.DataFrame
    return dataframe1
I get the same error when I run the default python code in AzureML. So I am pretty sure my data just does not fit in the data frame.
The default script is the following:
# The script MUST contain a function named azureml_main
# which is the entry point for this module.
#
# The entry point function can contain up to two input arguments:
#   Param<dataframe1>: a pandas.DataFrame
#   Param<dataframe2>: a pandas.DataFrame
def azureml_main(dataframe1 = None, dataframe2 = None):
    # Execution logic goes here
    print('Input pandas.DataFrame #1:\r\n\r\n{0}'.format(dataframe1))
    # If a zip file is connected to the third input port is connected,
    # it is unzipped under ".\Script Bundle". This directory is added
    # to sys.path. Therefore, if your zip file contains a Python file
    # mymodule.py you can import it using:
    # import mymodule
    # Return value must be of a sequence of pandas.DataFrame
    return dataframe1,
If you need to concatenate the integer ID and string DataType columns into a new column SignalID, use casting by astype. Then you can drop the columns DataType and ID, adding the parameter axis=1:
import pandas as pd

def azureml_main(dataframe1):
    dataframe1['SignalID'] = dataframe1['ID'].astype(str) + " , " + dataframe1['DataType']
    dataframe1 = dataframe1.drop(['DataType', 'ID'], axis=1)
    # Return value must be of a sequence of pandas.DataFrame
    return dataframe1
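As a quick local sanity check (a sketch with made-up data, run outside AzureML):
import pandas as pd

df = pd.DataFrame({'ID': [1, 2], 'DataType': ['temp', 'pressure']})
print(azureml_main(df))
# The single SignalID column now holds '1 , temp' and '2 , pressure'.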
