I am trying to calculate the mean of different columns using groupby.
Here is my code.
However, as soon as I try to calculate the mean, I get the error 'no numeric types to aggregate'. What is wrong with my code? Please help me! Thank you so much.
Can you please post your code as text and some example data?
What are the contents of data['low_stress'] and data['high_stress']?
My guess is that you use pd.Series([low_stress]) and thereby create a Series with a single element that holds the whole array of your data. Using pd.Series(low_stress) instead will probably fix your problem.
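A minimal sketch of the difference, assuming low_stress is a plain list of numbers (the variable name is taken from the question):

import pandas as pd

low_stress = [1.2, 3.4, 2.1]

# Wrapping the list in another list yields a one-element Series whose single
# value is the whole list, so the dtype is object and it cannot be aggregated:
s_bad = pd.Series([low_stress])
print(s_bad.dtype)   # object -> groupby aggregation raises "no numeric types to aggregate"

# Passing the list directly gives one numeric entry per element:
s_good = pd.Series(low_stress)
print(s_good.dtype)  # float64 -> mean() works as expected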
I am trying to format a pandas DataFrame value representation.
Basically, all I want is to get the thousands separator on my values.
I managed to do it using the df.style.format function. It does the job, but it also "breaks" my table's original design.
Here is an example of what is going on:
Is there anything I can do to avoid this? I want to keep the original table format and only change the format of the values.
PS: Don't know if it makes any difference, but I am using Google Colab.
In case anyone is having the same problem as I was using Colab, I have found a solution:
.set_table_attributes('class="dataframe"') seems to solve the problem
More info can be found here: https://github.com/googlecolab/colabtools/issues/1687
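A minimal sketch of that fix, assuming a DataFrame df with a numeric column (the "dataframe" class is what restores Colab's default table styling):

import pandas as pd

df = pd.DataFrame({"value": [1234567, 7654321]})

# Apply the thousands separator and re-attach the default "dataframe" CSS
# class so Colab keeps its usual table styling for the Styler output.
df.style.format("{:,.0f}").set_table_attributes('class="dataframe"')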
For this case you could do:
pdf.assign(a=pdf['a'].map("{:,.0f}".format))
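A minimal sketch of that approach, assuming pdf has a numeric column 'a' (note that the column becomes strings afterwards, so this is for display only):

import pandas as pd

pdf = pd.DataFrame({"a": [1234567, 7654321]})

# map() replaces each numeric value with its formatted string version,
# and assign() returns a new DataFrame with that column swapped in.
formatted = pdf.assign(a=pdf["a"].map("{:,.0f}".format))
print(formatted)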
I'm new to coding and having a hard time expressing/searching for the correct terms to help me along with this task. In my work I get some pretty large excel-files from people out in the field monitoring birds. The results need to be prepared for databases, reports, tables and more. I was hoping to use Python to automate some tasks for this.
How can I use Python (pandas?) to find certain rows/columns based on a common name/ID but with a unique suffix, and aggregate/sum the results that belong together under that common name? As an example, in the table provided I need to get all the results from sub-localities e.g. AA3_f, AA3_lf and AA3_s expressed as the sum (total of gulls for each species) of the subs in a new row for the main locality AA3.
Can someone please provide some code for this task, or help me in some other way? I have searched and watched many tutorials on Python, NumPy, pandas and also matplotlib, but I'm still clueless about how to set this up.
Any help appreciated.
Thanks!
Update:
@Harsh Nagouda, thanks for your reply. I tried your example using the groupby function, but I'm having trouble dividing the data into the correct groups. The "Locality" column has only unique values/IDs because they all have a suffix (they are sub-categories).
I tried to solve this by slicing the strings:
eng.Locality.str.slice(0,4,1)
I managed to slice off the suffixes so that the remainders were AA3_, AA4_ and so on.
Then I tried to do this slicing inside the groupby function. That failed. Then I tried to slice using pandas.DataFrame.apply(). That failed as well.
eng["Locality"].apply(eng.Locality.str.slice(0,4,1))
sum = eng.groupby(["Locality"].str.slice(0,4,1)).sum()
Any more help out there? As you can see above - I need it :-)
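A minimal sketch of grouping by a derived key without modifying the original column (the column names and values are assumptions based on the localities mentioned in the question):

import pandas as pd

eng = pd.DataFrame({
    "Locality": ["AA3_f", "AA3_lf", "AA3_s", "AA4_f"],
    "Gulls": [2, 5, 3, 7],
})

# Take everything before the underscore as the main-locality key and pass
# that Series directly to groupby instead of a column name.
main_locality = eng["Locality"].str.split("_").str[0]
totals = eng.groupby(main_locality).sum(numeric_only=True)
print(totals)

This keeps the sub-locality codes intact, since the Locality column itself is never overwritten.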
In your case, the pd.groupby option seems to be a good fit for the problem. The groupby function does exactly what its name suggests: it groups the parts of the DataFrame you want it to.
Since you mentioned a case based on grouping by localities and finding the sum of those values, this snippet should help you out:
sum = eng.groupby(["Locality"]).sum()
Additional commands and sorting styles can be found here:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
I finally figured out a way to get it done. Maybe not the smoothest way, but at least I get the end result I need:
Edited the Locality ID to remove the suffix: eng["Locality"] = eng["Locality"].str.slice(0, 4, 1)
Used the groupby function: sum = eng.groupby(["Locality"]).sum()
End result: (table of summed values per Locality)
Here are my data and the index value image:
As shown in the screenshot, the pandas DataFrame is returning two values. What could possibly be wrong? I am a beginner, sorry for the bad editing.
I think I see the issue.
data['Title'].iloc[0]
Try something like this. I think the .head() portion of the code is causing you issues.
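A minimal sketch of the difference, using a hypothetical DataFrame (selecting with head() still returns a Series, which prints with its index; .iloc[0] returns just the scalar value):

import pandas as pd

data = pd.DataFrame({"Title": ["Avatar", "Titanic"]})

# head(1) returns a one-element Series, so printing it shows the index
# alongside the value - the "two values" from the screenshot.
print(data["Title"].head(1))

# .iloc[0] returns only the scalar at position 0.
print(data["Title"].iloc[0])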
I am trying to make an upset plot using gene-disease association lists. I assume that I simply do not understand which data type is required as input, since most examples use artificially created datasets of data type "int64".
Upsetplot: https://buildmedia.readthedocs.org/media/pdf/upsetplot/latest/upsetplot.pdf and https://pydigger.com/pypi/UpSetPlot
I copied the examples given in the links above and they work just fine. When I try my own dataset I get the error message: AttributeError: 'Index' object has no attribute 'levels'
The data I use as input is a data frame with boolean information (see attachment "mydata.png"). So I have the diseases as columns, the genes as rows, and then boolean statements about whether the specific gene is associated with that disease or not (I can make this sound more computational if required).
An example data set that works can be found in the documentation or in the screenshot "upsetplot_data_example.png". The documentation says something about "category membership", but I do not quite understand what data type that is.
I assume it is a basic issue of not understanding what "format" is required. If anyone has an idea of what I need to do, please let me know. I welcome all feedback. I do not expect anyone to actually do the coding for me, however some pointers would be so helpful.
Thanks everyone!
The recently released Data Format Guide might prove helpful. Perhaps you need to set those boolean columns as the index of your data frame before passing it in, although ultimately, it may be easier to use from_contents or from_memberships to describe your data.
However, upsetplot will hopefully make the input format easier in a future version.
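A minimal sketch using from_contents, with made-up gene-disease sets purely for illustration (the real sets would come from your boolean data frame):

from upsetplot import from_contents, plot
import matplotlib.pyplot as plt

# Hypothetical example: each disease maps to the set of genes associated with it.
contents = {
    "diabetes": {"TCF7L2", "PPARG", "KCNJ11"},
    "obesity": {"FTO", "MC4R", "PPARG"},
    "asthma": {"ORMDL3", "IL33"},
}

# from_contents builds the boolean multi-index Series that plot() expects.
data = from_contents(contents)
plot(data)
plt.show()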
I'm using del df['column name'] to delete a column in pandas, but I get the error shown in the attached picture. I have no idea why it does not work. Any help to solve the problem is much appreciated.
You should use the drop method instead.
df.drop(columns='column_name')
And if you want to change the original DataFrame, you should add inplace=True as an argument to the method.
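A minimal sketch of both forms, using a hypothetical DataFrame:

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "column_name": [3, 4]})

# Returns a new DataFrame without the column, leaving df unchanged:
df_without = df.drop(columns="column_name")

# Or drop the column from df itself:
df.drop(columns="column_name", inplace=True)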
Also, avoid posting pictures if possible. Posting the written code is often more useful and makes it easier for someone to help you!