I have a dataframe with 2 columns, Col1: String and Col2: String.
I want to create a dict like {'col1': 'col2'}.
For example, the below csv data:
var1,InternalCampaignCode
var2,DownloadFileName
var3,ExternalCampaignCode
has to become :
{'var1':'InternalCampaignCode','var2':'DownloadFileName', ...}
The dataframe has around 200 records.
Please let me know how to achieve this.
The following should do the trick (a list comprehension rather than map, since in Python 3 map returns a lazy iterator, not a list):
df_as_dict = [row.asDict() for row in df.collect()]
Note that this is going to generate a list of dictionaries, where each dictionary represents a single record of your pyspark dataframe:
[
{'Col1': 'var1', 'Col2': 'InternalCampaignCode'},
{'Col1': 'var2', 'Col2': 'DownloadFileName'},
{'Col1': 'var3', 'Col2': 'ExternalCampaignCode'},
]
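To collapse that list of per-row dictionaries into the single mapping the question asks for, a plain dict comprehension over the collected records works. A minimal sketch on plain Python dicts (no Spark session needed):

```python
# Records in the shape produced by row.asDict() on each collected Row
records = [
    {'Col1': 'var1', 'Col2': 'InternalCampaignCode'},
    {'Col1': 'var2', 'Col2': 'DownloadFileName'},
    {'Col1': 'var3', 'Col2': 'ExternalCampaignCode'},
]

# Map each record's Col1 value to its Col2 value
result = {r['Col1']: r['Col2'] for r in records}
print(result)
```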
You can do a dict comprehension:
result = {r[0]: r[1] for r in df.collect()}
which gives
{'var1': 'InternalCampaignCode', 'var2': 'DownloadFileName', 'var3': 'ExternalCampaignCode'}
Related
Currently I have a dataframe.
| ID  | A | B |
|-----|---|---|
| 123 | a | b |
| 456 | c | d |
I would like to convert this into a dictionary, where the key of the dictionary is the "ID" column. The value of the dictionary would be another dictionary, where the keys of that dictionary are the name of the other columns, and the value of that dictionary would be the corresponding column value. Using the example above, this would look like:
{ 123 : { A : a, B : b}, 456 : {A : c, B : d} }
I have tried:
mydataframe.set_index("ID").to_dict(), but this results in a different format than the one wanted.
You merely need to pass the proper orient parameter, per the documentation.
import io
import pandas as pd
pd.read_csv(io.StringIO('''ID A B
123 a b
456 c d'''), sep=r'\s+').set_index('ID').to_dict(orient='index')
{123: {'A': 'a', 'B': 'b'}, 456: {'A': 'c', 'B': 'd'}}
Of course, the columns maintain their string types, as indicated by the quote marks.
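For comparison, the default orientation keys by column first, which is exactly the format the question wanted to avoid. A quick sketch with the same toy frame, showing both orientations side by side:

```python
import pandas as pd

df = pd.DataFrame({'ID': [123, 456], 'A': ['a', 'c'], 'B': ['b', 'd']}).set_index('ID')

# orient='index': outer keys are the index values (the wanted shape)
by_index = df.to_dict(orient='index')

# default orient='dict': outer keys are the column names
by_column = df.to_dict()
```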
Consider the following:
import pandas as pd
df = pd.DataFrame({'ID':[1,2,3], 'A':['x','y','z'], 'B':[111,222,333]})
What you're going for would be returned with the following two lines:
df.set_index('ID', inplace=True)
some_dict = {i:dict(zip(row.keys(), row.values)) for i, row in df.iterrows()}
With the output being equal to:
{1: {'A': 'x', 'B': 111}, 2: {'A': 'y', 'B': 222}, 3: {'A': 'z', 'B': 333}}
I have a pandas dataframe as below. I'm just wondering if there's any way to have my column values as my key to the json.
df:
| symbol | price |
|:-------|------:|
| a      |   120 |
| b      |   100 |
| c      |   200 |
I expect the json to look like {'a': 120, 'b': 100, 'c': 200}
I've tried the below and got one JSON object per line: {"symbol":"a","price":120} {"symbol":"b","price":100} {"symbol":"c","price":200}
df.to_json('price.json', orient='records', lines=True)
Let's start by creating the dataframe that OP mentions:
import pandas as pd
df = pd.DataFrame({'symbol': ['a', 'b', 'c'], 'price': [120, 100, 200]})
Considering that OP doesn't want the JSON values as a list (as OP commented here), the following does the job:
df.groupby('symbol').price.apply(lambda x: x.iloc[0]).to_dict()
[Out]: {'a': 120, 'b': 100, 'c': 200}
If one wants the JSON values as a list, the following does the job:
df.groupby('symbol').price.apply(list).to_json()
[Out]: {"a":[120],"b":[100],"c":[200]}
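If every symbol appears exactly once, the groupby isn't strictly needed. A simpler sketch via set_index (assuming unique symbols):

```python
import pandas as pd

df = pd.DataFrame({'symbol': ['a', 'b', 'c'], 'price': [120, 100, 200]})

# Index by symbol, then turn the price column straight into a dict
result = df.set_index('symbol')['price'].to_dict()
```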
Try this:
import pandas as pd
d = {'symbol': ['a', 'b', 'c'], 'price': [120, 100, 200]}
df = pd.DataFrame(data=d)
print(df)
print (df.set_index('symbol').rename(columns={'price':'json_data'}).to_json())
# EXPORT TO FILE
df.set_index('symbol').rename(columns={'price':'json_data'}).to_json('price.json')
Output :
symbol price
0 a 120
1 b 100
2 c 200
{"json_data":{"a":120,"b":100,"c":200}}
I have a simple DataFrame:
Name Format
0 cntry int
1 dweight str
2 pspwght str
3 pweight str
4 nwspol str
I want a dictionairy as such:
{
"cntry":"int",
"dweight":"str",
"pspwght":"str",
"pweight":"str",
"nwspol":"str"
}
Where dict["cntry"] would return int or dict["dweight"] would return str.
How could I do this?
How about this:
import pandas as pd
df = pd.DataFrame({'col_1': ['A', 'B', 'C', 'D'], 'col_2': [1, 1, 2, 3], 'col_3': ['Bla', 'Foo', 'Sup', 'Asdf']})
res_dict = dict(zip(df['col_1'], df['col_3']))
Contents of res_dict:
{'A': 'Bla', 'B': 'Foo', 'C': 'Sup', 'D': 'Asdf'}
You're looking for DataFrame.to_dict()
From the documentation:
>>> df = pd.DataFrame({'col1': [1, 2],
... 'col2': [0.5, 0.75]},
... index=['row1', 'row2'])
>>> df
col1 col2
row1 1 0.50
row2 2 0.75
>>> df.to_dict()
{'col1': {'row1': 1, 'row2': 2}, 'col2': {'row1': 0.5, 'row2': 0.75}}
You can always invert an internal dictionary if it's not mapped how you'd like it to be:
inv_dict = {v: k for k, v in original_dict['Name'].items()}
I think what you want is:
df.set_index('Name').to_dict()['Format']
Since you want to use the values in the Name column as the keys to your dict.
Note that you might want to do:
df.set_index('Name').astype(str).to_dict()['Format']
if you want the values of the dictionary to be strings.
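A minimal end-to-end sketch with the frame from the question:

```python
import pandas as pd

df = pd.DataFrame({'Name': ['cntry', 'dweight', 'pspwght', 'pweight', 'nwspol'],
                   'Format': ['int', 'str', 'str', 'str', 'str']})

# Use Name as the index, then pull the Format column out as a dict
fmt = df.set_index('Name').to_dict()['Format']
```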
I have a dict of symbol: DataFrame. Each DataFrame is a time series with an arbitrary number of columns. I want to transform this data structure into a unique time series DataFrame (indexed by date) where each column contains the values of a symbol as a dict.
The following code does what I want, but is slow when it is performed on a dict with hundreds of symbols and DataFrames of 10k rows / 10 columns. I'm looking for ways to improve its speed.
import pandas as pd
dates = pd.bdate_range('2010-01-01', '2049-12-31')[:100]
data = {
'A': pd.DataFrame(data={'col1': range(100), 'col2': range(200, 300)}, index=dates),
'B': pd.DataFrame(data={'col1': range(100), 'col2': range(300, 400)}, index=dates),
'C': pd.DataFrame(data={'col1': range(100), 'col2': range(400, 500)}, index=dates)
}
def convert(data, name):
data = pd.concat([
pd.DataFrame(data={symbol: [dict(zip(df.columns, v)) for v in df.values]},
index=df.index)
for symbol, df in data.items()
], axis=1, join='outer')
data['type'] = name
data.index.name = 'date'
return data
result = convert(data, name='system')
result.tail(3)
A B C type
date
2010-05-18 {'col1': 97, 'col2': 297} {'col1': 97, 'col2': 397} {'col1': 97, 'col2': 497} system
2010-05-19 {'col1': 98, 'col2': 298} {'col1': 98, 'col2': 398} {'col1': 98, 'col2': 498} system
2010-05-20 {'col1': 99, 'col2': 299} {'col1': 99, 'col2': 399} {'col1': 99, 'col2': 499} system
Any help is greatly appreciated! Thank you.
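One possible speedup (a sketch, not benchmarked here): DataFrame.to_dict('records') builds all the row dicts for a frame in a single call, which avoids both the per-row dict(zip(...)) and the repeated one-column concat. This assumes all frames share the same index, as in the example data:

```python
import pandas as pd

dates = pd.bdate_range('2010-01-01', periods=3)
data = {
    'A': pd.DataFrame({'col1': range(3), 'col2': range(200, 203)}, index=dates),
    'B': pd.DataFrame({'col1': range(3), 'col2': range(300, 303)}, index=dates),
}

def convert_fast(data, name):
    # to_dict('records') yields one dict per row, in index order
    out = pd.DataFrame({symbol: df.to_dict('records') for symbol, df in data.items()},
                       index=next(iter(data.values())).index)
    out['type'] = name
    out.index.name = 'date'
    return out

result = convert_fast(data, name='system')
```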
The gist of this post is that I have "23" in my original data, and I want "23" in my resulting dict (not "23.0"). Here's how I've tried to handle it with Pandas.
My Excel worksheet has a coded Region column:
23
11
27
(blank)
25
Initially, I created a dataframe and Pandas set the dtype of Region to float64:
import pandas as pd
filepath = 'data_file.xlsx'
df = pd.read_excel(filepath, sheet_name=0, header=0)
df
23.0
11.0
27.0
NaN
25.0
Pandas converts the dtype to object if I use fillna() to replace the NaNs with blanks, which seems to eliminate the decimals.
df.fillna('', inplace=True)
df
23
11
27
(blank)
25
Except I still get decimals when I convert the dataframe to a dict:
data = df.to_dict('records')
data
[{'region': 23.0},
 {'region': 11.0},
 {'region': 27.0},
 {'region': ''},
 {'region': 25.0}]
Is there a way I can create the dict without the decimal places? By the way, I'm writing a generic utility, so I won't always know the column names and/or value types, which means I'm looking for a generic solution (vs. explicitly handling Region).
Any help is much appreciated, thanks!
The problem is that after fillna('') your underlying values are still float, despite the column being of type object:
import numpy as np
import pandas as pd

s = pd.Series([23., 11., 27., np.nan, 25.])
s.fillna('').iloc[0]
23.0
Instead, apply a formatter, then replace:
s.apply('{:0.0f}'.format).replace('nan', '').to_dict()
{0: '23', 1: '11', 2: '27', 3: '', 4: '25'}
Using a custom function takes care of integers and keeps strings as strings:
import pprint
import pandas as pd
def func(x):
try:
return int(x)
except ValueError:
return x
df = pd.DataFrame({'region': [1, 2, 3, float('nan')],
'col2': ['a', 'b', 'c', float('nan')]})
df.fillna('', inplace=True)
pprint.pprint(df.applymap(func).to_dict('records'))
Output:
[{'col2': 'a', 'region': 1},
{'col2': 'b', 'region': 2},
{'col2': 'c', 'region': 3},
{'col2': '', 'region': ''}]
A variation that also keeps floats as floats:
import pprint
import pandas as pd
def func(x):
try:
if int(x) == x:
return int(x)
else:
return x
except ValueError:
return x
df = pd.DataFrame({'region1': [1, 2, 3, float('nan')],
'region2': [1.5, 2.7, 3, float('nan')],
'region3': ['a', 'b', 'c', float('nan')]})
df.fillna('', inplace=True)
pprint.pprint(df.applymap(func).to_dict('records'))
Output:
[{'region1': 1, 'region2': 1.5, 'region3': 'a'},
{'region1': 2, 'region2': 2.7, 'region3': 'b'},
{'region1': 3, 'region2': 3, 'region3': 'c'},
{'region1': '', 'region2': '', 'region3': ''}]
You could add dtype=str:
import pandas as pd
filepath = 'data_file.xlsx'
df = pd.read_excel(filepath, sheet_name=0, header=0, dtype=str)
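A self-contained sketch of the same idea using read_csv with an in-memory buffer (since the Excel file isn't available here): dtype=str keeps '23' as the string '23', and fillna('') handles the blank cell.

```python
import io
import pandas as pd

csv_data = 'region,name\n23,a\n,b\n25,c\n'

# dtype=str reads every present value as a string; missing cells still come in as NaN
df = pd.read_csv(io.StringIO(csv_data), dtype=str).fillna('')
records = df.to_dict('records')
```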