Obtaining a dictionary out of regular expressions - python

I have a question that includes various steps.
I am parsing a file that looks like this:
9
123
0 987
3 890 234 111
1 0 1 90 1 34 1 09 1 67
1 684321
2 352 69
1 1 1 243 1 198 1 678 1 11
2 098765
1 143
1 2 1 23 1 63 1 978 1 379
3 784658
1 43
1 3 1 546 1 789 1 12 1 098
I want to make these lines in the file the keys of a dictionary (ignoring the first number and taking only the second one, because the first just indicates which key it is):
0 987
1 684321
2 098765
3 784658
And these lines the values (again ignoring the first number, because it just indicates how many elements there are):
3 890 234 111
2 352 69
1 143
1 43
So at the end it has to look like this:
d = {987 : [890, 234, 111], 684321 : [352, 69],
098765 : [143], 784658 : [43]}
So far I have this:
findkeys = re.findall(r"\d\t(\d+)\n", line)
findelements = re.findall(r"\d\t(\d+)", line)
listss.append("".join(findelements))
d = {findkeys: listss}
The regular expressions need refining: the one for the keys also matches other lines that happen to contain just one number as well, which I don't want as keys; in the example file, 43 shows up as a key. And the regular expression for the elements gives me back all the lines.
I don't know if it would be easier to make the code ignore the lines I don't need information from, but I don't know how to do that.
I want to keep it as simple as possible.
Thanks!

with open('filename.txt') as f:
    lines = f.readlines()

lines = [x.strip() for x in lines]
lines = lines[2:]
keys = lines[::3]
values = lines[1::3]
output lines:
['0 987',
'3 890 234 111',
'1 0 1 90 1 34 1 09 1 67',
'1 684321',
'2 352 69',
'1 1 1 243 1 198 1 678 1 11',
'2 098765',
'1 143',
'1 2 1 23 1 63 1 978 1 379',
'3 784658',
'1 43',
'1 3 1 546 1 789 1 12 1 098']
output keys:
['0 987', '1 684321', '2 098765', '3 784658']
output values:
['3 890 234 111', '2 352 69', '1 143', '1 43']
Now you just have to put it together! Iterate through the keys and values, as in the sketch below.
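For example, a minimal sketch of that last step (note that a key such as 098765 loses its leading zero once converted to int; keep the keys as strings if that matters):
d = {}
for k, v in zip(keys, values):
    key = int(k.split()[1])                     # second number on the key line
    d[key] = [int(x) for x in v.split()[1:]]    # drop the leading element count
print(d)
# {987: [890, 234, 111], 684321: [352, 69], 98765: [143], 784658: [43]}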

Once you have the lines in a list (lines variable), you can simply use re to isolate numbers and dictionary/list comprehension to build the desired data structure.
Based on your example data, every 3rd line is a key, with its values on the following line. This means you only need to stride by 3 in the list.
findall() will give you the list of numbers (as text) on each line and you can ignore the first one with simple subscripts.
import re
value = re.compile(r"(\d+)")
numbers = [ [int(v) for v in value.findall(line)] for line in lines]
intDict = { key[1]:values[1:] for key,values in zip(numbers[2::3],numbers[3::3]) }
You could also do it using split(), keeping a guard against empty entries in case the split ever produces them:
numbers = [ [int(v) for v in line.split() if v != ""] for line in lines]
intDict = { key[1]:values[1:] for key,values in zip(numbers[2::3],numbers[3::3]) }

You could build yourself a parser with e.g. parsimonious:
from parsimonious.nodes import NodeVisitor
from parsimonious.grammar import Grammar
data = """
9
123
0 987
3 890 234 111
1 0 1 90 1 34 1 09 1 67
1 684321
2 352 69
1 1 1 243 1 198 1 678 1 11
2 098765
1 143
1 2 1 23 1 63 1 978 1 379
3 784658
1 43
1 3 1 546 1 789 1 12 1 098
"""
grammar = Grammar(
    r"""
    data = (important / garbage)+
    important = keyline newline valueline
    garbage = ~".*" newline?
    keyline = ws number ws number
    valueline = (ws number)+
    newline = ~"[\n\r]"
    number = ~"\d+"
    ws = ~"[ \t]*"
    """
)
tree = grammar.parse(data)
class DataVisitor(NodeVisitor):
    output = {}
    current = None

    def generic_visit(self, node, visited_children):
        return node.text or visited_children

    def visit_keyline(self, node, children):
        key = node.text.split()[-1]
        self.current = key

    def visit_valueline(self, node, children):
        values = node.text.split()
        self.output[self.current] = [int(x) for x in values[1:]]

dv = DataVisitor()
dv.visit(tree)
print(dv.output)
This yields
{'987': [890, 234, 111], '684321': [352, 69], '098765': [143], '784658': [43]}
The idea here is that every "keyline" is composed of just two numbers, the second of which becomes the key. The line that follows it is the value line.
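If integer keys are wanted, to match the dictionary in the question, one possible tweak is to convert in visit_keyline (again, 098765 would then lose its leading zero):
def visit_keyline(self, node, children):
    # store the second number on the key line as an int instead of a string
    self.current = int(node.text.split()[-1])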

Related

Filtering dataframes based on one column with a different type than another column

I have the following problem
import pandas as pd
data = {
"ID": [420, 380, 390, 540, 520, 50, 22],
"duration": [50, 40, 45,33,19,1,3],
"next":["390;50","880;222" ,"520;50" ,"380;111" ,"810;111" ,"22;888" ,"11" ]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
As you can see I have
ID duration next
0 420 50 390;50
1 380 40 880;222
2 390 45 520;50
3 540 33 380;111
4 520 19 810;111
5 50 1 22;888
6 22 3 11
Things to notice:
ID type is int
next is a string of numbers separated by ; when there is more than one
I would like to keep only the rows where none of the next values appear in ID
For example in this case
420 has a follow-up in both 390 and 50
380 has as next 880 and 222, both of which are not in ID, so keep this one
540 has as next 380 and 111, and while 111 is not in ID, 380 is, so not this one
same with 50
In the end I want to get
1 380 40 880;222
4 520 19 810;111
6 22 3 11
When there was only one value I used print(df[~df.next.astype(int).isin(df.ID)]), but in this case isin cannot be applied directly.
How can I do this?
Let's try split, then explode, with an isin check:
s = df.next.str.split(';').explode().astype(int)    # one row per next value, original index preserved
out = df[~s.isin(df['ID']).groupby(level=0).any()]  # drop rows where any next value exists in ID
Out[420]:
ID duration next
1 380 40 880;222
4 520 19 810;111
6 22 3 11
Use a regex with word boundaries for efficiency:
pattern = '|'.join(df['ID'].astype(str))
out = df[~df['next'].str.contains(fr'\b(?:{pattern})\b')]
Output:
ID duration next
1 380 40 880;222
4 520 19 810;111
6 22 3 11
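For reference, the pattern built from this ID column ends up as shown below; the word boundaries are what keep, say, 50 from matching inside a longer number:
pattern = '|'.join(df['ID'].astype(str))
print(fr'\b(?:{pattern})\b')
# \b(?:420|380|390|540|520|50|22)\b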

Replace blank value in dataframe based on another column condition

I have many blanks in a merged data set and I want to fill them with a condition.
My current code looks like this
import pandas as pd
import csv
import numpy as np
pd.set_option('display.max_columns', 500)
# Read all files into pandas dataframes
Jan = pd.read_csv(r'C:\~\Documents\Jan.csv')
Feb = pd.read_csv(r'C:\~\Documents\Feb.csv')
Mar = pd.read_csv(r'C:\~\Documents\Mar.csv')
Jan=pd.DataFrame({'Department':['52','5','56','70','7'],'Item':['2515','254','818','','']})
Feb=pd.DataFrame({'Department':['52','56','765','7','40'],'Item':['2515','818','524','','']})
Mar=pd.DataFrame({'Department':['7','70','5','8','52'],'Item':['45','','818','','']})
all_df_list = [Jan, Feb, Mar]
appended_df = pd.concat(all_df_list)
df = appended_df
df.to_csv(r"C:\~\Documents\SallesDS.csv", index=False)
Data set:
df
Department Item
52 2515
5 254
56 818
70
7 50
52 2515
56 818
765 524
7
40
7 45
70
5 818
8
52
What I want is to fill the empty cells in Item with the value that the same Department has in other rows.
So if Department is 52 and Item is empty, it should be filled with 2515.
If Department is 7 and Item is empty, fill it with 45,
and the result should look like this
df
Department Item
52 2515
5 254
56 818
70
7 50
52 2515
56 818
765 524
7 45
40
7 45
70
5 818
8
52 2515
I tried the following methods but none of them worked.
1
df.loc[(df['Item'].isna()) & (df['Department'].str.contains(52)), 'Item'] = 2515
df.loc[(df['Item'].isna()) & (df['Department'].str.contains(7)), 'Item'] = 45
2
df["Item"] = df["Item"].fillna(df["Department"])
df = df.replace({"Item":{"52":"2515", "7":"45"}})
Both either return an error or do not work.
Answer:
Hi, I have used the code below and it worked:
b = [52]
df.Item=np.where(df.Department.isin(b),df.Item.fillna(2515),df.Item)
a = [7]
df.Item=np.where(df.Department.isin(a),df.Item.fillna(45),df.Item)
Hope it helps someone who faces the same issue.
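Note that with the sample frames constructed above, Department holds strings and the blanks are empty strings rather than NaN, so for that exact data the same idea needs string values and an explicit conversion of '' to NaN first; a hedged sketch:
df['Item'] = df['Item'].replace('', np.nan)    # the blanks are empty strings, not NaN
df.Item = np.where(df.Department.isin(['52']), df.Item.fillna('2515'), df.Item)
df.Item = np.where(df.Department.isin(['7']), df.Item.fillna('45'), df.Item)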
The following solution first creates a map of each department and its maximum corresponding item (assuming there is one), and then matches that item to a department with a blank item. Note that in your data frame, the empty items are empty strings ("") and not NaN.
Create a map:
values = df.groupby('Department').max()
values['Item'] = values['Item'].apply(lambda x: np.nan if x == "" else x)
values = values.dropna().reset_index()
Department Item
0 5 818
1 52 2515
2 56 818
3 7 45
4 765 524
Then use df.apply():
df['Item'] = df.apply(lambda x: values[values['Department'] == x['Department']]['Item'].values if x['Item'] == "" else x['Item'], axis=1)
In this case, the new values will have brackets around them. They can be removed with str.replace():
df['Item'] = df['Item'].astype(str).str.replace(r'\[|\'|\'|\]', "", regex=True)
The result:
Department Item
0 52 2515
1 5 254
2 56 818
3 70
4 7 45
0 52 2515
1 56 818
2 765 524
3 7 45
4 40
0 7 45
1 70
2 5 818
3 8
4 52 2515
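An alternative that avoids the bracket cleanup is to pull the scalar out of the lookup directly; a small hedged sketch along the same lines, using the values map built above (the helper name is hypothetical):
def lookup(row):
    # return the mapped item for blank cells, else keep the original value
    if row['Item'] != "":
        return row['Item']
    match = values.loc[values['Department'] == row['Department'], 'Item']
    return match.iloc[0] if not match.empty else row['Item']

df['Item'] = df.apply(lookup, axis=1)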

How to convert a text file to an Excel file without deleting the spaces between data

I want to convert a text file to an Excel file without deleting the spaces in each line.
Note that the number of columns is the same for every line of the file.
the text file follows the following format:
First row
05100079 0000001502 5 01 2 070 1924 02 06 1994 C508 2 8500 3 8500 3 3 1 1 012 10 0 98 00 4 8 8 9 0 40 01 2 15 26000 1748 C508 116 102 3 09 98 013 1 1 0 1 10 10 0 09003 50060 50060 0 0 369 99 9 1 4 4 5 8 0 0181 1 80 00 01 0 9 9 8 1 0 00 00 020 0
second row
05100095 0000001502 2 01 2 059 1917 02 03 1977 C504 2 8500 3 8500 3 9 1 1 54-11-0999-00 2 9 0 90 01 2 12 26000 1744 C504 116 102 3 09 98 013 1 1 0 2 0 09011 50060 50060 0 36 9 9 1 9 9 5 8 0 3161 9 9 8 020 0
How can I edit the code so that it converts the text file to an Excel file without deleting the spaces between data?
The code below deletes the spaces in each line.
I want to convert the file to an Excel sheet without any modification to the original content:
the spaces stay spaces and all other data keeps the same format.
import xlwt
import xlrd

book = xlwt.Workbook()
ws = book.add_sheet('First Sheet')  # Add a sheet
f = open('testval.txt', 'r+')
data = f.readlines()  # read all lines at once
for i in range(len(data)):
    row = data[i].split()  # This will return a line of string data; you may need to convert to other formats depending on your use case
    for j in range(len(row)):
        ws.write(i, j, row[j])  # Write to cell i, j
book.save('testval' + '.xls')
f.close()
Expected output:
an Excel file in the same format as the original text file.
If you have fixed-length fields, you need to split each line using index intervals.
For instance, you can do:
import io
import xlwt

book = xlwt.Workbook()
ws = book.add_sheet('First Sheet')  # Add a sheet
with io.open("testval.txt", mode="r", encoding="utf-8") as f:
    for row_idx, row in enumerate(f):
        row = row.rstrip()
        ws.write(row_idx, 0, row[0:8])
        ws.write(row_idx, 1, row[9:19])
        ws.write(row_idx, 2, row[20:21])
        ws.write(row_idx, 3, row[22:24])
        # and so on...
book.save("sample.xls")  # xlwt writes the legacy .xls format
You get each row split across separate cells, with the field contents preserved.
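If there are many fields, the hardcoded slices can also be driven from a list of offsets; a minimal sketch, where the (start, end) pairs below are placeholders for the real fixed-width layout:
import io
import xlwt

FIELDS = [(0, 8), (9, 19), (20, 21), (22, 24)]  # hypothetical column boundaries

book = xlwt.Workbook()
ws = book.add_sheet('First Sheet')
with io.open("testval.txt", mode="r", encoding="utf-8") as f:
    for row_idx, row in enumerate(f):
        row = row.rstrip()
        for col_idx, (start, end) in enumerate(FIELDS):
            ws.write(row_idx, col_idx, row[start:end])
book.save("sample.xls")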

A few empty spaces at the start of a csv file before the header data, in Python

In a csv file, if a line starts with a # sign or is empty, I can remove or ignore it easily.
# some description here
# 1 is for good , 2 is bad and 3 for worse
empty line
I can deal with the empty lines and the lines starting with # using the following logic in Python.
while True:
    if len(data[0]) == 0 or data[0][0][0] == '#':
        data.pop(0)
    else:
        break
return data
But below is header data that has a few empty spaces at the start before the data begins:
0 temp_data 1 temp_flow 2 temp_record 3 temp_all
22 33 434 344
34 43 434 355
In some files I get header data like below, where I have to ignore only the # sign and not the column names:
#0 temp_data 1 temp_flow 2 temp_record 3 temp_all
22 33 434 344
34 43 434 355
I have no clue how to deal with these two situations; my logic above fails on both of them.
Any help would be appreciated.
You can use the string strip() function to remove leading and trailing whitespace first...
>>> ' 0 temp_data 1 temp_flow 2 temp_record 3 temp_all'.strip()
'0 temp_data 1 temp_flow 2 temp_record 3 temp_all'
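A minimal sketch of how both situations could be handled on top of that, under the assumption that the real data rows are purely numeric and the header is the last non-numeric line before them (the helper name is hypothetical):
def clean_header(lines):
    header = None
    data_rows = []
    for raw in lines:
        line = raw.strip()                 # handles the leading-spaces case
        if not line:
            continue                       # skip blank lines
        tokens = line.lstrip('#').split()  # drop a leading '#' if present
        if all(t.isdigit() for t in tokens):
            data_rows.append(line)         # purely numeric -> a data row
        elif not data_rows:
            header = ' '.join(tokens)      # last non-numeric line before the data wins
    return header, data_rows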

Reading values from a text file with different row and column size in python

I have read other similar posts but they don't seem to work in my case. Hence, I'm posting it newly here.
I have a text file which has varying row and column sizes. I am interested in the rows of values which have a specific parameter. E.g. in the sample text file below, I want the last two values of each line which has the number '1' in the second position. That is, I want the values '1, 101', '101, 2', '2, 102' and '102, 3' from the lines starting with the values '101 to 104' because they have the number '1' in the second position.
$MeshFormat
2.2 0 8
$EndMeshFormat
$Nodes
425
.
.
$EndNodes
$Elements
630
.
97 15 2 0 193 97
98 15 2 0 195 98
99 15 2 0 197 99
100 15 2 0 199 100
101 1 2 0 201 1 101
102 1 2 0 201 101 2
103 1 2 0 202 2 102
104 1 2 0 202 102 3
301 2 2 0 303 178 78 250
302 2 2 0 303 250 79 178
303 2 2 0 303 198 98 249
304 2 2 0 303 249 99 198
.
.
.
$EndElements
The problem is that, with the code I have come up with (below), it starts from '101' but it reads the values from the other lines up to '304' or more. What am I doing wrong, or does someone have a better way to tackle this?
# Here, (additional_lines + anz_knoten_gmsh - 2) are additional lines that need to be skipped
# at the beginning of the .txt file. Initially I find out where the range
# of the lines lies which I need.
# The two_noded_elem_start is the first line having the '1' at the second position
# and four_noded_elem_start is the first line number having '2' in the second position.
# So, basically I'm reading between these two parameters.
input_file = open(os.path.join(gmsh_path, "mesh_outer_region.msh"))
output_file = open(os.path.join(gmsh_path, "mesh_skip_nodes.txt"), "w")

for i, line in enumerate(input_file):
    if i == (additional_lines + anz_knoten_gmsh + two_noded_elem_start - 2):
        break

for i, line in enumerate(input_file):
    if i == additional_lines + anz_knoten_gmsh + four_noded_elem_start - 2:
        break
    elem_list = line.strip().split()
    del elem_list[:5]
    writer = csv.writer(output_file)
    writer.writerow(elem_list)

input_file.close()
output_file.close()
*EDIT: The piece of code used to find the parameters like two_noded_elem_start is as follows:
# anz_elemente_ueberg_gmsh is another parameter that is found out
# from a previous piece of code and '$EndElements' is what
# is at the end of the text file "mesh_outer_region.msh".
input_file = open(os.path.join(gmsh_path, "mesh_outer_region.msh"), "r")

for i, line in enumerate(input_file):
    if line.strip() == anz_elemente_ueberg_gmsh:
        break

for i, line in enumerate(input_file):
    if line.strip() == '$EndElements':
        break
    element_list = line.strip().split()
    if element_list[1] == '1':
        two_noded_elem_start = element_list[0]
        two_noded_elem_start = int(two_noded_elem_start)
        break

input_file.close()
>>> with open('filename') as fh:               # Open the file
...     for line in fh:                        # For each line in the file
...         values = line.split()              # Split the values into a list
...         if values[1] == '1':               # Compare the second value
...             print(values[-2], values[-1])  # Print the 2nd from last and last
1 101
101 2
2 102
102 3
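To get back to the original goal of writing those pairs to a CSV, and to avoid touching anything outside the $Elements block, a minimal sketch along the same lines (file names and gmsh_path as in the question):
import csv
import os

with open(os.path.join(gmsh_path, "mesh_outer_region.msh")) as fh, \
     open(os.path.join(gmsh_path, "mesh_skip_nodes.txt"), "w", newline="") as out:
    writer = csv.writer(out)
    in_elements = False
    for line in fh:
        line = line.strip()
        if line == "$Elements":
            in_elements = True
            continue
        if line == "$EndElements":
            break
        if not in_elements:
            continue
        values = line.split()
        if len(values) > 1 and values[1] == "1":   # '1' in the second position
            writer.writerow(values[-2:])           # keep the last two values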
