How can I fill a dataframe from recursive dictionary values? - python

I have created a script that reads multiple PDF files and extracts information from them one by one. This script generates one dictionary of data per PDF.
Ex:
First iteration, from the first PDF file:
d = {"GGT":["transl","mut"], "ATT":["alt3"], "ATC":["alt5"], "AUC":["alteration"]}
Second iteration, from the second PDF file:
d = {"GGT":["transl","mut"], "AUC":["alteration"]}
... and so on, up to 200 PDF files.
Initially I have a dataframe created with all the genes that the analysis can detect.
df = pd.DataFrame(data=None, columns=["GGT","AUC","ATC","ATT","UUU","UUT"], dtype=None, copy=False)
Desired output:
What I would like to obtain is a dataframe where the information of the values is stored in a recursive way line by line.
For example: (the desired example output is not reproduced here)
Is there an easy way to implement this, or functions that can help me?

IIUC, you are trying to loop through the dictionaries and add them as rows to your dataframe? I'm not sure how recursion applies here, given "What I would like to obtain is a dataframe where the information of the values is stored in a recursive way line by line."
d1 = {"GGT":["transl","mut"], "ATT":["alt3"], "ATC":["alt5"], "AUC":["alteration"]}
d2 = {"GGT":["transl","mut"], "AUC":["alteration"]}
dicts = [d1, d2] #imagine this list contains the 200 dictionaries
df = pd.DataFrame(data=None, columns=["GGT","AUC","ATC","ATT","UUU","UUT"], dtype=None, copy=False)
for d in dicts: # since there are only 200 rows, a simple loop with append works
    df = df.append(d, ignore_index=True)
df
Out[1]:
GGT AUC ATC ATT UUU UUT
0 [transl, mut] [alteration] [alt5] [alt3] NaN NaN
1 [transl, mut] [alteration] NaN NaN NaN NaN
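Note: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so the loop above fails on current pandas. A minimal sketch of the same idea with pd.concat, assuming dicts is the list of 200 dictionaries from above:
import pandas as pd

rows = [pd.DataFrame([d]) for d in dicts]        # one single-row frame per dictionary
df = pd.concat([df] + rows, ignore_index=True)   # columns missing from a dict become NaN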

Convert heavily nested json file into R/Python dataframe

I have found numerous similar questions on Stack Overflow; however, one issue remains unsolved for me. I have a heavily nested ".json" file that I need to import and convert into an R or Python data.frame to work with. The JSON file contains lists (usually empty, but sometimes holding data). An example of the JSON's structure is shown in the minimal reproducible example below.
I use R's library jsonlite and Python's pandas.
# R
jsonlite::fromJSON(json_file, flatten = TRUE)
# or
jsonlite::read_json(json_file, simplifyVector = T)

# Python
import json
import pandas as pd

with open("json_file.json", encoding="utf-8") as f:
    data = json.load(f)
pd.json_normalize(data)
Generally, in both cases it works. The output looks like a normal data.frame; however, the problem is that some columns of the new data.frame contain embedded lists. It seems that both pandas and jsonlite combined each list into a single column, which is clearly seen in the screenshots below.
(The R and Python output screenshots are not reproduced here.)
As you can see, some columns, such as wymagania.wymaganiaKonieczne.wyksztalcenia, are nothing but vectors holding a combined/embedded list, i.e. the content of a list has been combined into a single column.
As the desired output, I want each element of such lists split into its own column of the data.frame. In other words, I want to obtain a normal, tidy data.frame without any nested data.frames or lists. Both R and Python solutions are appreciated.
Minimum reproducible example:
[
  {
    "warunkiPracyIPlacy": {"miejscePracy": "abc", "rodzajObowiazkow": "abc", "zakresObowiazkow": "abc", "rodzajZatrudnienia": "abc", "kodRodzajuZatrudnienia": "abc", "zmianowosc": "abc"},
    "wymagania": {
      "wymaganiaKonieczne": {
        "zawody": [],
        "wyksztalcenia": ["abc"],
        "wyksztalceniaSzczegoly": [{"kodPoziomuWyksztalcenia": "RPs002|WY", "kodTypuWyksztalcenia": "abc"}],
        "jezyki": [],
        "jezykiSzczegoly": [],
        "uprawnienia": []
      },
      "wymaganiaPozadane": {
        "zawody": [],
        "zawodySzczegoly": [],
        "staze": []
      },
      "wymaganiaDodatkowe": {"zawody": [], "zawodySzczegoly": []},
      "inneWymagania": "abc"
    },
    "danePracodawcy": {"pracodawca": "abc", "nip": "abc", "regon": "abc", "branza": null},
    "pozostaleDane": {"identyfikatorOferty": "abc", "ofertaZgloszonaPrzez": "abc", "ofertaZgloszonaPrzezKodJednostki": "abc"},
    "typOferty": "abc",
    "typOfertyNaglowek": "abc",
    "rodzajOferty": ["DLA_ZAREJESTROWANYCH"],
    "staz": false,
    "link": false
  }
]
This is an answer for Python. It is not very elegant, but I think it will do for your purpose.
I have called your example file nested_json.json.
import json
import pandas as pd

json_file = "nested_json.json"
with open(json_file, encoding="utf-8") as f:
    data = json.load(f)

df = pd.json_normalize(data)
df_exploded = df.apply(lambda x: x.explode()).reset_index(drop=True)
# check, based on the first row, which columns hold dicts
columns_dict = df_exploded.columns[df_exploded.apply(lambda x: isinstance(x[0], dict))]
# append the split-out dict columns to the dataframe
for col in columns_dict:
    df_splitted_dict = df_exploded[col].apply(pd.Series)
    df_exploded = pd.concat([df_exploded, df_splitted_dict], axis=1)
This leads to a rectangular dataframe:
>>> df_exploded.T
0
typOferty abc
typOfertyNaglowek abc
rodzajOferty DLA_ZAREJESTROWANYCH
staz False
link False
warunkiPracyIPlacy.miejscePracy abc
warunkiPracyIPlacy.rodzajObowiazkow abc
warunkiPracyIPlacy.zakresObowiazkow abc
warunkiPracyIPlacy.rodzajZatrudnienia abc
warunkiPracyIPlacy.kodRodzajuZatrudnienia abc
warunkiPracyIPlacy.zmianowosc abc
wymagania.wymaganiaKonieczne.zawody NaN
wymagania.wymaganiaKonieczne.wyksztalcenia abc
wymagania.wymaganiaKonieczne.wyksztalceniaSzcze... {'kodPoziomuWyksztalcenia': 'RPs002|WY', 'kodT...
wymagania.wymaganiaKonieczne.jezyki NaN
wymagania.wymaganiaKonieczne.jezykiSzczegoly NaN
wymagania.wymaganiaKonieczne.uprawnienia NaN
wymagania.wymaganiaPozadane.zawody NaN
wymagania.wymaganiaPozadane.zawodySzczegoly NaN
wymagania.wymaganiaPozadane.staze NaN
wymagania.wymaganiaDodatkowe.zawody NaN
wymagania.wymaganiaDodatkowe.zawodySzczegoly NaN
wymagania.inneWymagania abc
danePracodawcy.pracodawca abc
danePracodawcy.nip abc
danePracodawcy.regon abc
danePracodawcy.branza None
pozostaleDane.identyfikatorOferty abc
pozostaleDane.ofertaZgloszonaPrzez abc
pozostaleDane.ofertaZgloszonaPrzezKodJednostki abc
kodPoziomuWyksztalcenia RPs002|WY
kodTypuWyksztalcenia abc
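As an alternative for this particular structure (a sketch, not from the original answer; the chosen paths are assumptions based on the MRE above), pd.json_normalize can unpack a known list-of-dicts path directly via record_path, carrying flat fields along as meta columns:
import json
import pandas as pd

with open("nested_json.json", encoding="utf-8") as f:
    data = json.load(f)

# unpack the list of dicts under wymagania -> wymaganiaKonieczne -> wyksztalceniaSzczegoly,
# keeping two flat fields as meta columns
df = pd.json_normalize(
    data,
    record_path=["wymagania", "wymaganiaKonieczne", "wyksztalceniaSzczegoly"],
    meta=["typOferty", ["danePracodawcy", "pracodawca"]],
)
print(df)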

How can I take data from files without pandas in Python?

I have two text files, code.txt and values.txt. The code file has categorical values, and the values file has the numerical value for each corresponding category. Consecutive identical categories are presumed to form one segment; in the example below, data points 3 through 11 are all of category "H" and together form one segment.
I want to write a function which takes these two files (code.txt and values.txt) and returns a dictionary as the output. The dictionary should have a key for each category, and under each category key a nested dictionary keyed by segment id. I cannot use pandas or numpy for this.
After all, it should look like this: (the expected output is not reproduced here)
Sample input data (values.txt), after pairing codes with values, looks like this (partially shown):
H 0.76923
H 0.131979
H 0.503175
T 0.867538
T 0.123256
code.txt
A
A
B
B
C
C
B
C
A
C
B
A
B
A
A
C
values.txt
1.00
2.89
3.46
3.5443
343.234
3535.35235
253415.3512
561.343
0.544534
222.453
213.5525
4532.3435
3541.134
55.31314
341.3143
131.4534
Complete code
codes = []
values = []
with open(r"values.txt") as file:
    for line in file:
        values.append(float(line.replace("\n", "")))
with open(r"code.txt") as file:
    for line in file:
        codes.append(line.replace("\n", ""))
# print(codes)
# print(values)
dictt = {}
for i in set(codes):
    dictt[i] = {"values": [], "mean": "", "length": ""}
for i in range(len(codes)):
    dictt[codes[i]]["values"].append(values[i])
for key, value in dictt.items():
    dictt[key]["length"] = len(value["values"])
    dictt[key]["mean"] = sum(value["values"]) / len(value["values"])
print(dictt)
OUTPUT (the printed dictionary is not reproduced here)
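The code above aggregates per category, but it does not yet build the per-segment dictionary the question also asks for. A minimal sketch of that step, assuming the codes and values lists from the complete code above, using itertools.groupby to treat each run of consecutive identical codes as one segment (the exact output layout is an assumption, since the expected output was only shown as an image):
from itertools import groupby

segments = {}  # {category: {segment_id: {"values": [...], "mean": ..., "length": ...}}}
seg_id = 0
pos = 0
for code, run in groupby(codes):
    run_values = []
    for _ in run:                       # consume this run of identical codes
        run_values.append(values[pos])  # pick up the matching values by position
        pos += 1
    segments.setdefault(code, {})[seg_id] = {
        "values": run_values,
        "mean": sum(run_values) / len(run_values),
        "length": len(run_values),
    }
    seg_id += 1
print(segments)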

Import multiple excel files, create a column and get values from excel file's name

I need to load multiple Excel files; each one is named with its starting date, e.g. "20190114".
Then I need to append them into one DataFrame.
For this, I use the following code:
import glob
import pandas as pd

all_data = pd.DataFrame()
for f in glob.glob('C:\\path\\*.xlsx'):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)
In fact, I do not need all the data, but only rows filtered on multiple columns.
Then, I would like to create an additional column ('from') holding the file name (which is a date) for each respective file.
Example:
Data from the Excel file named '20190101' (sample rows not shown)
Data from the Excel file named '20190115' (sample rows not shown)
The final dataframe must keep only rows whose 'price' column is not equal to 0 and whose 'code' column equals 'r' (I do not know if it is possible to filter the data during import, to avoid loading a huge volume of data), and then I need to add a 'from' column with the respective date coming from the file's name, like this (expected output not shown):
dataframes for trial:
import pandas as pd

df1 = pd.DataFrame({'id': ['id_1', 'id_2', 'id_3', 'id_4', 'id_5'],
                    'price': [0, 12.5, 17.5, 24.5, 7.5],
                    'code': ['r', 'r', 'r', 'c', 'r']})
df2 = pd.DataFrame({'id': ['id_1', 'id_2', 'id_3', 'id_4', 'id_5'],
                    'price': [7.5, 24.5, 0, 149.5, 7.5],
                    'code': ['r', 'r', 'r', 'c', 'r']})
IIUC, you can filter the necessary rows, then concat; for the file name you can use os.path.split() and trim the extension with string slicing:
import os

l = []
for f in glob.glob('C:\\path\\*.xlsx'):
    df = pd.read_excel(f)
    df['from'] = os.path.split(f)[1][:-5]  # file name without the '.xlsx' extension
    l.append(df[df['code'].eq('r') & df['price'].ne(0)])
pd.concat(l, ignore_index=True)
id price code from
0 id_2 12.5 r 20190101
1 id_3 17.5 r 20190101
2 id_5 7.5 r 20190101
3 id_1 7.5 r 20190115
4 id_2 24.5 r 20190115
5 id_5 7.5 r 20190115
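A small variation (an assumption on my part, not part of the original answer): pathlib's Path.stem returns the file name without its extension directly, which avoids the [:-5] slice:
from pathlib import Path

df['from'] = Path(f).stem  # 'C:\\path\\20190101.xlsx' -> '20190101'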

concatenating and saving multiple pair of CSV in pandas

I am a beginner in Python. I have a hundred pairs of CSV files. The files look like this:
25_13oct_speed_0.csv
26_13oct_speed_0.csv
25_13oct_speed_0.1.csv
26_13oct_speed_0.1.csv
25_13oct_speed_0.2.csv
26_13oct_speed_0.2.csv
and others
I want to concatenate each pair of files, one "25" file with its matching "26" file. Each pair of files shares a speed threshold (speed_0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0), which is labeled in the file name. These files have the same data structure.
Mac  Annotation  X  Y
A    first       0  0
A    last        0  0
B    first       0  0
B    last        0  0
Therefore, a plain concatenation is enough to join each pair. I use this method:
df1 = pd.read_csv('25_13oct_speed_0.csv')
df2 = pd.read_csv('26_13oct_speed_0.csv')
frames = [df1, df2]
result = pd.concat(frames)
for each pair of files, but it takes time and is not an elegant way. Is there a good way to combine the paired files automatically and save each result?
The idea is to create a DataFrame from the list of files and add 2 new columns by splitting each name on the first _ with Series.str.split:
print (files)
['25_13oct_speed_0.csv', '26_13oct_speed_0.csv',
'25_13oct_speed_0.1.csv', '26_13oct_speed_0.1.csv',
'25_13oct_speed_0.2.csv', '26_13oct_speed_0.2.csv']
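(If the files list is not already in memory, it could be collected with glob; the pattern below is an assumption based on the names shown above.)
import glob

files = sorted(glob.glob('*_13oct_speed_*.csv'))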
df1 = pd.DataFrame({'files': files})
df1[['g','names']] = df1['files'].str.split('_', n=1, expand=True)
print (df1)
files g names
0 25_13oct_speed_0.csv 25 13oct_speed_0.csv
1 26_13oct_speed_0.csv 26 13oct_speed_0.csv
2 25_13oct_speed_0.1.csv 25 13oct_speed_0.1.csv
3 26_13oct_speed_0.1.csv 26 13oct_speed_0.1.csv
4 25_13oct_speed_0.2.csv 25 13oct_speed_0.2.csv
5 26_13oct_speed_0.2.csv 26 13oct_speed_0.2.csv
Then loop per group of names: iterate over each group's rows with DataFrame.itertuples, create a new DataFrame with read_csv (adding, if necessary, a new column filled with the values from the g column), append it to a list, concat the list, and last save it to a new file named from the names column:
for i, g in df1.groupby('names'):
    out = []
    for n in g.itertuples():
        df = pd.read_csv(n.files).assign(source=n.g)
        out.append(df)
    dfbig = pd.concat(out, ignore_index=True)
    print(dfbig)
    dfbig.to_csv(g['names'].iat[0])
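One small caveat (an observation, not part of the original answer): to_csv writes the row index by default, so passing index=False keeps the saved files shaped like the inputs:
dfbig.to_csv(g['names'].iat[0], index=False)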

Dictionary of dataframes not saving all

I've been trying to create a dictionary of data frames so I can store data coming from different files. One dataframe is created in each pass of the following loop, and I would like to add each dataframe to the dictionary. I will have to join them later by the date.
d = {}
for num in range(3, 14):
    nodeName = "rgs" + str(num).zfill(2)  # the key should be the nodeName
    # Bunch of stuff to get the data ...
    # Fill dataframe
    data = {'date': date_list, 'users': users_list}
    df = pd.DataFrame(data)
    df = df.convert_objects(convert_numeric=True)
    df = df.dropna(subset=['users'])
    df['users'] = df['users'].astype(int)
    d = {nodeName: df}
print d
The problem I have is that if I print the dictionary outside the loop, I only have one item, the last one.
{'rgs13': date users
0 2016-01-18 1
1 2016-01-19 1
2 2016-01-20 1
3 2016-01-21 1
4 2016-01-22 1
5 2016-01-23 1
6 2016-01-24 0
But I can clearly see that I can generate all the dataframes without problems inside the loop. How can I make the dictionary keep all the dfs? What am I doing wrong?
Thanks for the help.
It's because in the end you are re-defining d.
What you want is this:
d = {}
for num in range(3, 14):
    nodeName = "rgs" + str(num).zfill(2)  # the key should be the nodeName
    # Bunch of stuff to get the data ...
    # Fill dataframe
    data = {'date': date_list, 'users': users_list}
    df = pd.DataFrame(data)
    df = df.convert_objects(convert_numeric=True)
    df = df.dropna(subset=['users'])
    df['users'] = df['users'].astype(int)
    d[nodeName] = df
print d
Instead of d = {nodeName:df} use
d[nodeName] = df
This adds a key/value pair to d, whereas d = {nodeName:df} reassigns d to a new dict (with only the one key/value pair). Doing that in a loop spells death to all the previous key/value pairs.
You may find Ned Batchelder's Facts and myths about Python names and values a useful read. It will give you the right mental model for thinking about the relationship between variable names and values, and help you see what statements modify values (e.g. d[nodeName] = df) versus reassign variable names (e.g. d = {nodeName:df}).
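A tiny illustration of that distinction (a minimal sketch of my own):
d = {}
d["a"] = 1    # mutates the existing dict in place; d is now {'a': 1}
d["b"] = 2    # d is now {'a': 1, 'b': 2}
d = {"c": 3}  # rebinds the name d to a brand-new dict; 'a' and 'b' are gone
print(d)      # {'c': 3}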
