Dataframe with empty column in the data - python
I have a list of lists with a header row followed by the value rows.
In some cases the last "column" has an empty value for all of the rows (if at least one row has a value it works fine), but DataFrame is not happy about that, because the number of columns differs from the header.
I'm thinking of appending a None value to the first list without a value before creating the DataFrame, but I wonder if there is a better way to handle this case?
import pandas

data = [
["data1", "data2", "data3"],
["value11", "value12"],
["value21", "value22"],
["value31", "value32"]]
headers = data.pop(0)
dataframe = pandas.DataFrame(data, columns = headers)
You could do this:
import pandas as pd
data = [
["data1", "data2", "data3"],
["value11", "value12"],
["value21", "value22"],
["value31", "value32"]
]
# create dataframe
df = pd.DataFrame(data)
# set new column names
# this will use ["data1", "data2", "data3"] as new columns, because they are in the first row
df.columns = df.iloc[0].tolist()
# now that you have the right column names, just skip the first row
df = df.iloc[1:].reset_index(drop=True)
df
data1 data2 data3
0 value11 value12 None
1 value21 value22 None
2 value31 value32 None
Is this what you want?
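For reference, the padding idea from the question (appending None before building the frame) can be sketched like this, a minimal self-contained version using the example data:

```python
import pandas as pd

data = [
    ["data1", "data2", "data3"],
    ["value11", "value12"],
    ["value21", "value22"],
    ["value31", "value32"],
]

headers = data.pop(0)
# Pad every row with None up to the header length before building the frame
rows = [row + [None] * (len(headers) - len(row)) for row in data]
df = pd.DataFrame(rows, columns=headers)
```

This avoids the length-mismatch error regardless of how many trailing columns are empty.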
You can use the DataFrame.reindex method to add the missing columns. You could do something like this:
import pandas as pd
df = pd.DataFrame(data)
# Name only the columns that actually exist, to avoid a length-mismatch error
df.columns = headers[:df.shape[1]]
df = df.reindex(headers,axis=1)
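A self-contained sketch of this reindex approach, with the `data` and `headers` from the question filled in (the missing "data3" column is created and filled with NaN):

```python
import pandas as pd

data = [
    ["value11", "value12"],
    ["value21", "value22"],
]
headers = ["data1", "data2", "data3"]

df = pd.DataFrame(data)
# Name only the columns that exist in the data, then reindex to the full
# header list; reindex adds "data3" as a NaN-filled column
df.columns = headers[:df.shape[1]]
df = df.reindex(headers, axis=1)
```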
Related
Create a pandas DataFrame where each cell is a set of strings
I am trying to create a DataFrame like so:
      col_a       col_b
 {'soln_a'}  {'soln_b'}
In case it helps, here are some of my failed attempts:
import pandas as pd

my_dict_a = {"col_a": set(["soln_a"]), "col_b": set("soln_b")}
df_0 = pd.DataFrame.from_dict(my_dict_a)
# ValueError: All arrays must be of the same length
df_1 = pd.DataFrame.from_dict(my_dict_a, orient="index").T
# splits 'soln_b' into individual letters

my_dict_b = {"col_a": ["soln_a"], "col_b": ["soln_b"]}
df_2 = pd.DataFrame(my_dict_b).apply(set)
# TypeError: 'set' type is unordered
df_3 = pd.DataFrame.from_dict(my_dict_b, orient="index").T
# creates a DataFrame of lists
df_3.apply(set, axis=1)
# combines into a single set of {soln_a, soln_b}
What's the best way to do this?
You just need to ensure your input data structure is formatted correctly. The (default) dictionary-to-DataFrame constructor expects the values in the dictionary to be a collection of some type, so you need a collection of set objects rather than a key pointing directly at a set. If I change the input dictionary to hold a list of sets, it works as expected:
import pandas as pd

my_dict = {
    "col_a": [{"soln_a"}, {"soln_c"}],
    "col_b": [{"soln_b", "soln_d"}, {"soln_c"}]
}
df = pd.DataFrame.from_dict(my_dict)
print(df)
      col_a             col_b
0  {soln_a}  {soln_d, soln_b}
1  {soln_c}          {soln_c}
You could apply a list comprehension on the columns:
my_dict_b = {"col_a": ["soln_a"], "col_b": ["soln_b"]}
df_2 = pd.DataFrame(my_dict_b)
df_2 = df_2.apply(lambda col: [set([x]) for x in col])
Output:
      col_a     col_b
0  {soln_a}  {soln_b}
Why not something like this?
df = pd.DataFrame({
    'col_a': [set(['soln_a'])],
    'col_b': [set(['soln_b'])],
})
Output:
>>> df
      col_a     col_b
0  {soln_a}  {soln_b}
Count occurrence of column values in other dataframe column
I have two dataframes and I want to count the occurrence of "classifier" in "fullname". My problem is that my script counts a word like "carrepair" only for one classifier, and I would like to have a count for both classifiers. I would also like to add one random coordinate that matches the classifier.
First dataframe: Second dataframe: Result so far: Desired Result:
My script so far:
import pandas as pd

fl = pd.read_excel(r'fullname.xlsx')
clas = pd.read_excel(r'classifier.xlsx')
fl.fullname = fl.fullname.str.lower()
clas.classifier = clas.classifier.str.lower()
pat = '({})'.format('|'.join(clas['classifier'].unique()))
fl['fullname'] = fl['fullname'].str.extract(pat, expand=False)
clas['count_of_classifier'] = clas['classifier'].map(fl['fullname'].value_counts())
print(clas)
Thanks!
You could try this:
import pandas as pd

fl = pd.read_excel(r'fullname.xlsx')
clas = pd.read_excel(r'classifier.xlsx')
fl.fullname = fl.fullname.str.lower()
clas.classifier = clas.classifier.str.lower()

# Add a new column to 'fl' containing either 'repair' or 'car'
for value in clas["classifier"].values:
    fl.loc[fl["fullname"].str.contains(value, case=False), value] = value

# Count values and create a new dataframe
new_clas = pd.DataFrame(
    {
        "classifier": [col for col in clas["classifier"].values],
        "count": [fl[col].count() for col in clas["classifier"].values],
    }
)

# Merge 'fl' and 'new_clas'
new_clas = pd.merge(
    left=new_clas, right=fl, how="left", left_on="classifier", right_on="fullname"
).reset_index(drop=True)

# Keep only the expected columns
new_clas = new_clas.reindex(columns=["classifier", "count", "coordinate"])
print(new_clas)
# Outputs
  classifier  count            coordinate
0     repair      3  52.520008, 13.404954
1        car      3  54.520008, 15.404954
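The core of the double-counting fix can also be shown without the Excel files. A minimal sketch with hypothetical stand-in dataframes (column names taken from the question): `str.contains` is evaluated once per classifier, so "carrepair" is counted for both "car" and "repair", unlike `str.extract`, which keeps only the first match.

```python
import pandas as pd

# Hypothetical stand-ins for the two Excel files from the question
fl = pd.DataFrame({"fullname": ["carrepair", "repairshop", "carwash"]})
clas = pd.DataFrame({"classifier": ["repair", "car"]})

# Count matches per classifier independently, so one name can be
# counted for several classifiers
clas["count_of_classifier"] = clas["classifier"].apply(
    lambda c: fl["fullname"].str.contains(c).sum()
)
```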
Add values from a nested JSON to a pandas dataframe
I have the following JSON object: {"code":"Ok","matchings":[{"confidence":0.025755,"geometry":"qnp{bBww{kH??~D_I}E_J{EaJ{E{I{AsCoJgQfKuTjJwNtF}HdBuBnAgBpFsF~EeEzAsAt#i#lA}#x#q#lEmCjDuBdDoAvFmAfYmEtAUrJyDj#_#h#m#`#u#T}#J{#B_A?gAGmAM}#Su#]u#wN{QwI{KcA}Aa#gASiAWsBOwCGmDCoJ??cEH?{FA{HgIXuG`#eHrAsLdDkI|CkIfDq#VoDlB_GzDaE`D_A|#kA`AeAx#sI~G}DlDk#j#mClCiOrQwGvJiGxJoFdK_HjP{Pne#aLt\\sK~]oKb_#sG~TeJ`_#q#fD{#dEoBlMwBxQaAbI{Dh\\wKrfAiRbvBy#`KaLjwAyHj_AANM~AUxC}#tKi#bHe#jGfBj#t#V|#\\TFjAXz#HhASxAy#vCcBjX~GvG`BlEjAv\\xJfBf#dThG~Ad#nFrBnCbBdCvBzB`DbCfEr{#b~A","legs":[{"annotation":{"nodes":[330029575,5896466632,330029575,5896466588,5896466587,5896466586,5896466637,330029340,330029339,330029338,1497356855,1880770263,46388213,1880770262,1880770257,2021835257,3306177380,46387099,2021835255,6909770873,46385948,6909770874,46384887,46382454]},"steps":[],"distance":332.2,"duration":93.1,"summary":"","weight":93.1},{"annotation":{"nodes":[46384887,46382454,5888264001,6909802199,3296872014,6909802198,5888264003,6909802197,3296872012,6909802194,6909802195,6909802193,6909802196,3296872013,3296872015]},"steps":[],"distance":88.1,"duration":13.5,"summary":"","weight":13.5},{"annotation":{"nodes":[3296872013,3296872015,6909802186,6909802187,6909770884,3296872017,6909802185,4904066416,3296872018,1614187163]},"steps":[],"distance":62.3,"duration":12.4,"summary":"","weight":12.4},{"annotation":{"nodes":[3296872018,1614187163,2054127599,1614187129,5896479942,6909802219,46384372,1027299576,6909802220,46389815]},"steps":[],"distance":144,"duration":25.2,"summary":"","weight":25.2},{"annotation":{"nodes":[6909802220,46389815,6296436095,6296436094,298079716,6296436096,46391324,1083528076,6909802221,6909802222,46393158]},"steps":[],"distance":90.6,"duration":10.1,"summary":"","weight":10.1},{"annotation":{"nodes":[6909802222,46393158,46393795,6909802223,1027299602,6909802224,46396846,46398397,2054127645,46399502,46400708,1027299589,6712474212,6903665704,46402805,46403163,4374153462]},"steps":[],"distanc
e":422.9,"duration":40.1,"summary":"","weight":40.1},{"annotation":{"nodes":[46403163,4374153462,46404084,1027299603,364146312,2262500170]},"steps":[],"distance":273.6,"duration":24.7,"summary":"","weight":24.7},{"annotation":{"nodes":[364146312,2262500170,5289718695]},"steps":[],"distance":170.9,"duration":15.3,"summary":"","weight":15.3},{"annotation":{"nodes":[2262500170,5289718695,2054127657,1693195716,46408565,6913837768,1693195721,2262500247,1693195714,2262500104,1693195717]},"steps":[],"distance":56.9,"duration":14.2,"summary":"","weight":14.2},{"annotation":{"nodes":[46397705,46401323,46405521]},"steps":[],"distance":86.6,"duration":12.6,"summary":"","weight":12.6},{"annotation":{"nodes":[46401323,46405521,46410773]},"steps":[],"distance":156.5,"duration":22.5,"summary":"","weight":22.5},{"annotation":{"nodes":[46405521,46410773,452003319,452003320]},"steps":[],"distance":95.4,"duration":13.8,"summary":"","weight":13.8},{"annotation":{"nodes":[452003319,452003320,46411428,46414457,46419384,46421801]},"steps":[],"distance":226.4,"duration":32.6,"summary":"","weight":32.6},{"annotation":{"nodes":[46419384,46421801,46421802,46421735]},"steps":[],"distance":69.2,"duration":10,"summary":"","weight":10},{"annotation":{"nodes":[46421802,46421735,46421416]},"steps":[],"distance":34.1,"duration":4.9,"summary":"","weight":4.9},{"annotation":{"nodes":[46421735,46421416,46420466]},"steps":[],"distance":2.7,"duration":0.3,"summary":"","weight":0.3},{"annotation":{"nodes":[46421416,46420466]},"steps":[],"distance":31.4,"duration":4.6,"summary":"","weight":4.6},{"annotation":{"nodes":[46421416,46420466,452003307,452003308,46421260,46422467,5761752102,46423905]},"steps":[],"distance":135.5,"duration":25,"summary":"","weight":25},{"annotation":{"nodes":[5761752102,46423905,46424346,5777055555,5713213408,46425605,5777055050,5777346784,5777055556,5713221227,46426685,46427741,3175895442,3183752428,5826014405,46428227]},"steps":[],"distance":106.5,"duration":14.9,"summary":"","w
eight":14.9},{"annotation":{"nodes":[5826014405,46428227,3175895443,5826014406,3175895444,5826014368,5826014369,5826014374,46429570,5826014373,5826014375,5826014372,5826014358,5826014371,5826014370,5826014376]},"steps":[],"distance":172.7,"duration":15.7,"summary":"","weight":15.7},{"annotation":{"nodes":[2054127660,2054127638,2054127605,6296435009,2054127599,6909770882,3296872018,4904066416,6909802185,3296872017,6909770884,6909802187,6909802186,3296872015,3296872013,6909802196,6909802193,6909802195,6909802194,3296872012,6909802197,5888264003,6909802198,3296872014,6909802199,5888264001,46382454,46384887,6909770874,46385948,6909770873,2021835255,46387099,3306177380,2021835257]},"steps":[],"distance":317.7,"duration":46.1,"summary":"","weight":46.1},{"annotation":{"nodes":[3306177380,2021835257,1880770257,1880770262,46388213,1880770263,1497356855,330029338,330029339,330029340,5896466637]},"steps":[],"distance":150.4,"duration":29.4,"summary":"","weight":29.4}],"distance":80317.8,"duration":10983.5,"weight_name":"duration","weight":10983.5}],"tracepoints":[{"alternatives_count":0,"waypoint_index":0,"matchings_index":0,"location":[4.929932,52.372217],"name":"Willem Theunisse Blokstraat","distance":10.791613,"hint":"CAkHgHAJBwAlAAAAAAAAAAAAAAAAAAAALCd0QQAAAAAAAAAAAAAAACUAAAAAAAAAAAAAAAAAAAABAAAAjDlLAPkiHwP3OEsAGiMfAwAArxMz7Ejh"},null,{"alternatives_count":0,"waypoint_index":1,"matchings_index":0,"location":[4.932506,52.3709],"name":"Frans de Wollantstraat","distance":11.915926,"hint":"pwUBAPYEAYAHAAAARwAAAAAAAAAAAAAA3_qaQE0JPUIAAAAAAAAAAAcAAABHAAAAAAAAAAAAAAABAAAAmkNLANQdHwPtQksAxB0fAwAA_xUz7Ejh"},{"alternatives_count":0,"waypoint_index":472,"matchings_index":0,"location":[4.932745,52.373288],"name":"Piet 
Heinkade","distance":0.98867,"hint":"gwUBgMgFAQAFAAAADQAAABoBAABYAAAAQMS3QHTNW0HsWZ1DmZ2WQgUAAAANAAAAGgEAAFgAAAABAAAAiURLACgnHwN9REsAIycfAwoADwkz7Ejh"},null,null,{"alternatives_count":1,"waypoint_index":473,"matchings_index":0,"location":[4.934022,52.371637],"name":"Piet Heinkade","distance":2.713742,"hint":"NA8HADsPB4ACAAAADwAAADoAAAA-AAAAjU82QIAqg0FUpSdCLoWJQgIAAAAPAAAAOgAAAD4AAAABAAAAhklLALUgHwNfSUsAsCAfAwQAvxUz7Ejh"},null,null,{"alternatives_count":1,"waypoint_index":474,"matchings_index":0,"location":[4.93213,52.371794],"name":"Frans de Wollantstraat","distance":10.337677,"hint":"AgUBgAcFAQABAAAABAAAAAwAAAAAAAAA1paeP-KrBUAomAdBAAAAAAEAAAAEAAAADAAAAAAAAAABAAAAIkJLAFIhHwOrQksAeiEfAwIA7xQz7Ejh"},{"alternatives_count":1,"waypoint_index":475,"matchings_index":0,"location":[4.93074,52.372528],"name":"Isaac Titsinghkade","distance":0.65222,"hint":"AwkHgAYJBwA5AAAACwAAAAAAAACMAAAA_Fe_QWP_k0AAAAAA33FqQjkAAAALAAAAAAAAAIwAAAABAAAAtDxLADAkHwOtPEsANCQfAwAADw4z7Ejh"},null,null]} I want to add all values that belong to the key nodes to one column in a pandas dataframe When I run: for i in output["matchings"][0]['legs']: result = i['annotation']['nodes'] df = pd.DataFrame(result, columns=['node']) df only a fraction gets added to the dataframe. What am I doing wrong?
At the end of your for loop, df only holds the 'nodes' list of the last leg, because the DataFrame is recreated on every iteration. You have to collect all the 'nodes' keys into a single dataframe instead. Extending your code (DataFrame.append was removed in pandas 2.0, so pd.concat is used here):
frames = []
for i in output["matchings"][0]['legs']:
    result = i['annotation']['nodes']
    frames.append(pd.DataFrame(result, columns=['node']))
df = pd.concat(frames, ignore_index=True)
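Since the nodes are plain lists, an alternative is to flatten them with a nested comprehension and build the frame once. A runnable sketch with a small stand-in for the nested structure from the question:

```python
import pandas as pd

# Minimal stand-in for the nested JSON structure in the question
output = {
    "matchings": [
        {"legs": [
            {"annotation": {"nodes": [1, 2, 3]}},
            {"annotation": {"nodes": [3, 4]}},
        ]}
    ]
}

# Flatten every leg's node list into one column
all_nodes = [
    node
    for leg in output["matchings"][0]["legs"]
    for node in leg["annotation"]["nodes"]
]
df = pd.DataFrame({"node": all_nodes})
```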
Using Pandas, update column values based on a list of IDs and new values
I have a df with an ID and a Sell column. I want to update the Sell column using a list of new Sell values (not all rows need to be updated, just some of them). In all the examples I have seen, the value is always the same or comes from another column. In my case, I have a dynamic value.
This is what I would like:
file = 'something.csv'  # Has 300 rows
IDList = ['453164259', '453106168', '453163869', '453164463']  # IDs
SellList = [120, 270, 350, 410]  # Sell values
csv = path_pattern = os.path.join(os.getcwd(), file)
df = pd.read_csv(file)
# Update the rows with the corresponding Sell value of the ID
df.loc[df['Id'].isin(IDList[x]), 'Sell'] = SellList[x]
df.to_csv(file)
Any ideas? Thanks in advance
Assuming 'id' is a string (as in IDList) and is not the index of your df:
IDList = ['453164259', '453106168', '453163869', '453164463']
SellList = [120, 270, 350, 410]
id_dict = {x: y for x, y in zip(IDList, SellList)}
for index, row in df.iterrows():
    if row['id'] in id_dict:
        df.loc[index, 'Sell'] = id_dict[row['id']]
If 'id' is the index:
IDList = ['453164259', '453106168', '453163869', '453164463']
SellList = [120, 270, 350, 410]
id_dict = {x: y for x, y in zip(IDList, SellList)}
for index, row in df.iterrows():
    if index in id_dict:
        df.loc[index, 'Sell'] = id_dict[index]
What I did is create a dictionary from IDList and SellList and then loop over the df using iterrows(). Note that df.loc is indexed with the original index label; wrapping it in str() would create new rows instead of updating existing ones.
df = pd.read_csv('something.csv')
IDList = ['453164259', '453106168', '453163869', '453164463']
SellList = [120, 270, 350, 410]
This will work efficiently, especially for large files:
df.set_index('id', inplace=True)
df.loc[IDList, 'Sell'] = SellList
df.reset_index(inplace=True)  # not mandatory, just in case you need 'id' back as a column
df.to_csv('something.csv')
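Another vectorized option, sketched here on a small hypothetical frame instead of the CSV from the question, is to build an id-to-Sell mapping and use map with fillna, so rows whose id is not in the list keep their old value:

```python
import pandas as pd

# Hypothetical stand-in for the 300-row CSV from the question
df = pd.DataFrame({
    "id": ["453164259", "453106168", "999999999"],
    "Sell": [0, 0, 0],
})
IDList = ["453164259", "453106168"]
SellList = [120, 270]

# Map each id to its new Sell value; ids outside IDList map to NaN,
# which fillna replaces with the original Sell value
mapping = dict(zip(IDList, SellList))
df["Sell"] = df["id"].map(mapping).fillna(df["Sell"])
```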
When using a pandas dataframe, how do I add column if does not exist?
I'm new to using pandas and am writing a script where I read in a dataframe and then do some computation on some of the columns. Sometimes I will have a column called "Met":
df = pd.read_csv(File, sep='\t', compression='gzip', header=0,
                 names=["Chrom", "Site", "coverage", "Met"])
Other times I will have:
df = pd.read_csv(File, sep='\t', compression='gzip', header=0,
                 names=["Chrom", "Site", "coverage", "freqC"])
I need to do some computation with the "Met" column, so if it isn't present I will need to calculate it using:
df['Met'] = df['freqC'] * df['coverage']
Is there a way to check if the "Met" column is present in the dataframe, and if not, add it?
You can check for it like this:
if 'Met' not in df:
    df['Met'] = df['freqC'] * df['coverage']
When interested in conditionally adding columns in a method chain, consider using pipe() with a lambda:
df.pipe(lambda d: (
    d.assign(Met=d['freqC'] * d['coverage'])
    if 'Met' not in d
    else d
))
If you were creating the dataframe from scratch, you could create the missing columns without a loop merely by passing the column names into the pd.DataFrame() call:
cols = ['column 1', 'column 2', 'column 3', 'column 4', 'column 5']
df = pd.DataFrame(list_or_dict, index=['a',], columns=cols)
Alternatively you can use get:
df['Met'] = df.get('Met', df['freqC'] * df['coverage'])
If the column Met exists, its values are kept. Otherwise freqC and coverage are multiplied.
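A runnable sketch of the get-based approach, with hypothetical values for the coverage and freqC columns (note that the fallback expression is always evaluated, even when "Met" exists, so freqC must be present in both cases):

```python
import pandas as pd

# Hypothetical frame where "Met" is absent
df = pd.DataFrame({"coverage": [10, 20], "freqC": [0.5, 0.25]})

# get returns the "Met" column if present, else the computed fallback
df["Met"] = df.get("Met", df["freqC"] * df["coverage"])
```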