I think I have a pretty straightforward question here. Essentially I have a table with one column where each row contains a set of values that had previously been converted from a JSON string.
For example, here is one cell value for the column "options":
[u'Tide Liquid with a Touch of Downy April Fresh 69oz', u'Tide Liquid with Febreze Spring & Renewal 69oz (HE or Non-HE)', u'Tide Liquid HE with Febreze Sport 69oz', u'Tide Liquid HE Clean Breeze 75oz', u'Tide Liquid Original 75oz', u'Other']
I want to add a new column that simply counts the number of values in this list. I can do this row by row with code like this:
df['num_choices'] = len(df.loc[row_num,'options'])
(i.e. I want to count the number of values in the column "options" and return that count in a new column called "num_choices")
Running this on the provided example above (with the input row#) will create a new column next to it with the value 6, since the count of options is 6.
How can I do this systematically for all 5,000 of my rows?
I tried to do this with Pandas' iterrows() function, but I've been told that would be way less efficient than simply using NumPy ndarray vectorization. But I can't seem to figure out how to do that.
Thanks so much for your help!
As mentioned in the comments above, there's not really any way to vectorize operations on arrays that contain arbitrary Python objects.
I don't think you can do much better than using a simple for loop or list comprehension, e.g.:
df['num_choices'] = np.array([len(row) for row in df.options])
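If you prefer pandas syntax, Series.apply gives the same result (it is still a Python-level loop under the hood, so don't expect it to be faster than the list comprehension):
df['num_choices'] = df['options'].apply(len)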
I have posted a similar question before, but I thought it'd be better to elaborate it in another way. For example, I have a dataframe of compounds assigned to a number, as it follows:
compound,number
17alpha_beta_SID_24898755,8
2_prolinal_109328,3
4_chloro_4491,37
5HT_144234_01,87
5HT_144234_02,2
6-OHDA_153466,23
Also, there is another dataframe with other properties as well as the compound names, but not only with their corresponding numbers: there are rows in which the compound names are assigned to different numbers. The cases where the numbers differ are not of interest:
rmsd,chemplp,plp,compound,number
1.00,14.00,-25.00,17alpha_beta_SID_24898755,7
0.38,12.00,-19.00,17alpha_beta_SID_24898755,8
0.66,16.00,-25.6,17alpha_beta_SID_24898755,9
0.87,24.58,-38.35,2_prolinal_109328,3
0.17,54.58,-39.32,2_prolinal_109328,4
0.22,22.58,-32.35,2_prolinal_109328,5
0.41,45.32,-37.90,4_chloro_4491,37
0.11,15.32,-37.10,4_chloro_4491,38
0.11,15.32,-17.90,4_chloro_4491,39
0.61,38.10,-45.86,5HT_144234_01,85
0.62,18.10,-15.86,5HT_144234_01,86
0.64,28.10,-25.86,5HT_144234_01,87
0.64,16.81,-10.87,5HT_144234_02,2
0.14,16.11,-10.17,5HT_144234_02,3
0.14,16.21,-10.17,5HT_144234_02,4
0.15,31.85,-24.23,6-OHDA_153466,23
0.13,21.85,-34.23,6-OHDA_153466,24
0.11,11.85,-54.23,6-OHDA_153466,25
The problem is that I want to find each compound and its corresponding number from dataframe 1 in dataframe 2, and return its entire row.
I was only able to come up with this (but because of the way the iteration goes in this case, it doesn't do what I intend):
import numpy as np
import csv
import pandas as pd
for c1, n1, c2, n2 in zip(df1.compound, df1.number, df2.compound, df2.number):
    if c1 == c2 and n1 == n2:
        print(df2[*])
I wanted to print the entire row in which c1==c2 and n1==n2.
Example: for 17alpha_beta_SID_24898755 (compound) 8 (its number) in dataframe 1, return the row in which this compound and this number is found in dataframe 2. The result should be:
0.38,12.00,-19.00,17alpha_beta_SID_24898755,8
I'd like to do this for all the compounds and their corresponding numbers from dataframe 1. The example I gave is only a small subset of an extremely extensive list. If anyone could help, thank you!
Take a look at the df.merge method:
df1.merge(df2, on=['compound', 'number'], how='inner')
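For a self-contained sketch, assuming the two tables are read from CSV files (the file names here are placeholders):
import pandas as pd

df1 = pd.read_csv("compounds.csv")    # columns: compound, number
df2 = pd.read_csv("properties.csv")   # columns: rmsd, chemplp, plp, compound, number
matched = df1.merge(df2, on=['compound', 'number'], how='inner')
print(matched)  # one row per (compound, number) pair that appears in both frames
On the sample data above, this keeps exactly one row per compound, e.g. 0.38,12.00,-19.00,17alpha_beta_SID_24898755,8 for the first one.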
I have a dataframe with a column called bmi (Body Mass Index) containing int values.
I have to separate the values in the bmi column into Under weight, Normal, Over weight, and Obese categories. Below is the loop I wrote for this.
However, I am getting an error. I am a beginner; I just started coding 2 weeks ago.
Generally speaking, using a for loop in pandas is usually a bad idea. Pandas allows you to manipulate data easily. For example, if you want to filter by some condition:
print(df[df["bmi"] > 30])
will print all rows where bmi > 30. It works as follows: df[condition]. The condition in this case is that "bmi" is larger than 30, so our condition is df["bmi"] > 30. Notice that the expression df[df["bmi"] > 30] returns all rows that satisfy the condition. I printed them, but you can manipulate them however you like.
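Conditions can also be combined; wrap each one in parentheses and join them with & (and) or | (or). For example, to select the rows between 25 and 30:
print(df[(df["bmi"] >= 25) & (df["bmi"] <= 30)])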
Even though it's generally a bad technique (or one used only for specific needs), you can of course iterate through a dataframe. This is not done via for l in df, as df is a DataFrame object. To iterate through it you can use iterrows:
for index, row in df.iterrows():
    if row["bmi"] > 30:
        print("Obese")
Also, for next time, please provide your code inline; don't paste an image of it.
If your goal is to separate into different labels, I suggest the following:
df.loc[df[df["bmi"] > 30, "NewColumn"] = "Obese"
df.loc[df[df["bmi"] < 18.5, "NewColumn"] = "Underweight"
The .loc operator allows me to manipulate only part of the data. Its format is [rows, columns]. So the above code takes only the rows where bmi > 30, and it takes only "NewColumn" (rename it to whatever you like), which is a new column. It puts the value on the right into this column. That way, after that operation, you have a new column in your dataframe that holds "Obese"/"Underweight" as you like.
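The remaining ranges follow the same pattern (the exact thresholds here are an assumption based on the usual BMI cut-offs; adjust them to whatever your assignment specifies):
df.loc[(df["bmi"] >= 18.5) & (df["bmi"] < 25), "NewColumn"] = "Normal"
df.loc[(df["bmi"] >= 25) & (df["bmi"] <= 30), "NewColumn"] = "Over weight"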
As a side note - there are better ways to map values (e.g. pandas' map and others), but if you are a beginner, it's important to understand simple methods of manipulating data before diving into more complex ones. That's why I am avoiding explaining the more complex methods here.
First of all, as mentioned in the comments, you should post text/code instead of screenshots.
You could do binning in pandas:
bmi_labels = ['Normal', 'Overweight', 'Obese']
cut_bins = [18.5, 24.9, 29.9, df["bmi"].max()]
df['bmi_label'] = pd.cut(df['bmi'], bins=cut_bins, labels=bmi_labels)
Here, I have made a separate column (bmi_label) to store the label, but you could do it in the same column (bmi) too.
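If you also need the Under weight label from the question, one small variant adds a lower bin edge (the 0 lower bound is an assumption; use whatever minimum fits your data):
bmi_labels = ['Under weight', 'Normal', 'Over weight', 'Obese']
cut_bins = [0, 18.5, 24.9, 29.9, df["bmi"].max()]
df['bmi_label'] = pd.cut(df['bmi'], bins=cut_bins, labels=bmi_labels)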
So I have this giant matrix (~1.5 million rows x 7 columns) and am trying to figure out an efficient way to split it up. For simplicity of what I'm trying to do, I'll work with this much smaller matrix as an example for what I'm trying to do. The 7 columns consist of (in this order): item number, an x and y coordinate, 1st label (non-numeric), data #1, data #2, and 2nd label (non-numeric). So using pandas, I've imported from an excel sheet my matrix called A that looks like this:
What I need to do is partition this based on both labels (i.e. so I have one matrix that is all the 13G + Aa together, another matrix that is 14G + Aa, and another one that is 14G + Ab together -- this would have me wind up with 3 separate 2x7 matrices). The reason for this is because I need to run a bunch of statistics on the dataset of numbers of the "Marker" column for each individual matrix (e.g. in this example, break the 6 "marker" numbers into three sets of 2 "marker" numbers, and then run statistics on each set of two numbers). Since there are going to be hundreds of these smaller matrices on the real data set I have, I was trying to figure out some way to make the smaller matrices be labeled something like M1, M2, ..., M500 (or whatever number it ends up being) so that way later, I can use some loops to apply statistics to each individual matrix all at once without having to write it 500+ times.
What I've done so far is to use pandas to import my data set into python as a matrix with the command:
import pandas as pd
import numpy as np
df = pd.read_csv(r"C:\path\cancerdata.csv")
A = df.as_matrix() #Convert excel sheet to matrix
A = np.delete(A, (0),axis=0) #Delete header row
Unfortunately, I haven't come across many resources for how to do what I want, which is why I wanted to ask here to see if anyone knows how to split up a matrix into smaller matrices based on multiple labels.
Your question has many implications, so instead of giving you a straight answer I'll try to give you some pointers on how to tackle this problem.
First off, don't transform your DataFrame into a Matrix. DataFrames are well-optimised for slicing and indexing operations (a Pandas Series object is in reality a fancy Numpy array anyway), so you only lose functionality by converting it to a Matrix.
You could probably convert your label columns into a MultiIndex. This way, you'll be able to access slices of your original DataFrame using df.loc, with a syntax similar to df.loc[label1].loc[label2].
A MultiIndex may sound confusing at first, but it really isn't. Try executing this code block and see for yourself what the resulting DataFrame looks like:
df = pd.read_csv("C:\path\cancerdata.csv")
labels01 = df["Label 1"].unique()
labels02 = df["Label 2"].unique()
index = pd.MultiIndex.from_product([labels01, labels02])
df.set_index(index, inplace=True)
print(df)
Here, we extracted all unique values in the columns "Label 1" and "Label 2" (we will reuse them further down), and then moved those two columns into a MultiIndex. In the df.set_index line, those columns are taken out of the DataFrame body - now they act as indices for your other columns. For example, in order to access the DataFrame slice from your original DataFrame whose Label 1 = 13G and Label 2 = Aa, you can simply write:
sliced_df = df.loc["13G"].loc["Aa"]
And perform whatever calculations/statistics you need with it.
Lastly, instead of saving each sliced DataFrame into a list or dictionary and then iterating over them to perform the calculations, consider rearranging your code so that, as soon as you create a sliced DataFrame, you perform the calculations, save the results to an output file/DataFrame, and move on to the next slicing operation. Something like:
for L1 in labels01:
    for L2 in labels02:
        sliced_df = df.loc[L1].loc[L2]
        results = perform_calculations(sliced_df)
        save_results(results)
This will both improve memory consumption and performance, which may be important considering your large dataset.
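As a side note, pandas' groupby can also hand you each (Label 1, Label 2) slice directly, which avoids looking up combinations that don't exist in the data. A minimal sketch, reusing the placeholder functions from above and the MultiIndex set earlier:
for (L1, L2), sliced_df in df.groupby(level=[0, 1]):
    results = perform_calculations(sliced_df)  # your own statistics routine
    save_results(results)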
I am working with a huge volume of data, around 50 million rows.
I want to find the unique combinations of values across multiple columns. I use the script below.
dataAll[['Frequency', 'Period', 'Date']].drop_duplicates()
But this is taking a long time, more than 40 minutes.
I found an alternative:
pd.unique(dataAll[['Frequency', 'Period', 'Date']].values.ravel('K'))
but the above script gives an array, whereas I need a dataframe like the first script returns.
Generally, your new code cannot be converted to a DataFrame, because:
pd.unique(dataAll[['Frequency', 'Period', 'Date']].values.ravel('K'))
it creates one big 1d numpy array, so after removing duplicates it is impossible to recreate the rows.
E.g. if there are 2 unique values, 3 and 1, it is impossible to find which datetimes belong to 3 and which to 1.
But if there is only one unique value of Frequency and, for each Period, it is possible to find the Date as in the sample, then a solution is possible.
EDIT:
One possible alternative is to use dask.dataframe.DataFrame.drop_duplicates.
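A minimal sketch of the dask approach (the file name is a placeholder; dask reads and deduplicates the data in partitions, which helps when the frame doesn't fit comfortably in memory):
import dask.dataframe as dd

ddf = dd.read_csv("data.csv")  # lazy, partitioned read
unique_df = ddf[['Frequency', 'Period', 'Date']].drop_duplicates().compute()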
I have two dataframes with different lengths (df, df1). They share one common label, "collo_number". I want to search the second dataframe for every collo_number in the first dataframe. The problem is that the second dataframe contains multiple rows, for different dates, for every collo_number. So I want to sum the values over these dates and add the result as a new column in the first dataframe.
I now use a loop, but it is rather slow and has to perform this operation for all 7 days in a week. Is there a way to get better performance? I tried multiple solutions but keep getting the error that I cannot use the equals sign for two dataframes with different lengths. Help would really be appreciated! Here is an example of what is working, but with rather bad performance.
df5=[df1.loc[(df1.index == nasa) & (df1.afleverdag == x1) & (df1.ind_init_actie=="N"), "aantal_colli"].sum() for nasa in df.collonr]
Your description is a bit vague (hence my comment). First, what you could do is select the rows of the dataframe that you want to search:
dftmp = df1[(df1.afleverdag==x1) & (df1.ind_init_actie=='N')]
so that you don't do this for every item in the loop.
Second, use .groupby.
newseries = dftmp['aantal_colli'].groupby(dftmp.index).sum()
newseries = newseries.reindex(df.collonr.unique())  # .reindex replaces the deprecated .ix accessor
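To get the sums back into the first dataframe as a new column (which is what the question asks for), one sketch is to map the grouped sums onto the collo numbers; the column name sum_colli is made up here:
dftmp = df1[(df1.afleverdag == x1) & (df1.ind_init_actie == 'N')]
sums = dftmp.groupby(dftmp.index)['aantal_colli'].sum()
df['sum_colli'] = df['collonr'].map(sums)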