Find specific string and output the whole line in Python

I have the following input txt file:
17,21.01.2019,0,0,0,0,E,75,meter tamper alarm
132,22.01.2019,64,296,225,996,A,,
150,23.01.2019,63,353,351,805,A,,
213,24.01.2019,64,245,244,970,A,,
201,25.01.2019,86,297,364,943,A,,
56,26.01.2019,73,678,678,1437,A,,
201,27.01.2019,83,654,517,1212,A,,
117,28.01.2019,58,390,202,816,A,,
69,29.01.2019,89,354,282,961,C,,
123,30.01.2019,53,267,206,852,A,,
I need to write a Python program that can parse through the file.
I need to find all the lines not containing A or C and output those lines to a new file.
I'm completely stuck after trying several regexes :( Can you help me?

Try
with open('filename') as f:
    for line in f:
        # 'and', not 'or': with 'or' every line matches, because no
        # line can be missing both letters and still fail the test
        if 'A' not in line and 'C' not in line:
            print(line)
Or better: since your file content resembles CSV (comma-separated values), use pandas for easier manipulation.
Read the file
import pandas as pd
df = pd.read_csv('filename', header=None, sep=',')
0 1 2 3 4 5 6 7 8
0 17 21.01.2019 0 0 0 0 E 75.0 meter tamper alarm
1 132 22.01.2019 64 296 225 996 A NaN NaN
2 150 23.01.2019 63 353 351 805 A NaN NaN
3 213 24.01.2019 64 245 244 970 A NaN NaN
4 201 25.01.2019 86 297 364 943 A NaN NaN
5 56 26.01.2019 73 678 678 1437 A NaN NaN
6 201 27.01.2019 83 654 517 1212 A NaN NaN
7 117 28.01.2019 58 390 202 816 A NaN NaN
8 69 29.01.2019 89 354 282 961 C NaN NaN
9 123 30.01.2019 53 267 206 852 A NaN NaN
Output
print(df[~df[6].str.contains('A|C', regex=True)])
0 1 2 3 4 5 6 7 8
0 17 21.01.2019 0 0 0 0 E 75.0 meter tamper alarm
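To write the filtered rows to a new file, as the question asks, a minimal sketch (the output file name 'output.txt' is an assumption):
df[~df[6].str.contains('A|C', regex=True)].to_csv('output.txt', header=False, index=False)
header=False and index=False keep the file in the same comma-separated shape as the input; ~df[6].isin(['A', 'C']) would be an exact-match alternative to the regex.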

Try:
with open('file.txt') as f_in, open('output.txt', 'w') as f_out:
    for line in f_in:
        if 'A' not in line and 'C' not in line:  # 'and' is required, see above
            f_out.write(line)  # write the matching lines to a new file

Related

concat result of apply in python

I am trying to apply a function to a column of a dataframe.
After getting multiple results as dataframes, I want to concat them all into one.
Why does the first option work and the second not?
import numpy as np
import pandas as pd

def testdf(n):
    test = pd.DataFrame(np.random.randint(0, n*100, size=(n*3, 3)), columns=list('ABC'))
    test['index'] = n
    return test

test = pd.DataFrame({'id': [1, 2, 3, 4]})
testapply = test['id'].apply(func=testdf)

# option 1
pd.concat([testapply[0], testapply[1], testapply[2], testapply[3]])

# option 2
pd.concat([testapply])
pd.concat expects a sequence of pandas objects, but your second option passes a list containing a single pd.Series (whose elements happen to be dataframes), so there is nothing to concatenate - you just get that series back as is. To fix the second approach, unpack the series:
print(pd.concat([*testapply]))
A B C index
0 91 15 91 1
1 93 85 91 1
2 26 87 74 1
0 195 103 134 2
1 14 26 159 2
2 96 143 9 2
3 18 153 35 2
4 148 146 130 2
5 99 149 103 2
0 276 150 115 3
1 232 126 91 3
2 37 242 234 3
3 144 73 81 3
4 96 153 145 3
5 144 94 207 3
6 104 197 49 3
7 0 93 179 3
8 16 29 27 3
0 390 74 379 4
1 78 37 148 4
2 350 381 260 4
3 279 112 260 4
4 115 387 173 4
5 70 213 378 4
6 43 37 149 4
7 240 399 117 4
8 123 0 47 4
9 255 172 1 4
10 311 329 9 4
11 346 234 374 4
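Equivalently, you can convert the series of dataframes to a plain list, which is the kind of sequence pd.concat expects:
pd.concat(testapply.tolist())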

How can I turn this into a DataFrame?

I am new to Python and was trying to run a basic web scraper. My code looks like this:
import requests
import pandas as pd
x = requests.get('https://www.baseball-reference.com/players/p/penaje02.shtml')
dfs = pd.read_html(x.content)
print(dfs)
df = pd.DataFrame(dfs)
When printing dfs it looks like this. I only want the second table.
[ Year Age Tm Lg G PA AB \
0 2018 20 HOU-min A- 36 156 136
1 2019 21 HOU-min A,A+ 109 473 409
2 2021 23 HOU-min AAA,Rk 37 160 145
3 2022 24 HOU AL 136 558 521
4 1 Yr 1 Yr 1 Yr 1 Yr 136 558 521
5 162 Game Avg. 162 Game Avg. 162 Game Avg. 162 Game Avg. 162 665 621
R H 2B ... OPS OPS+ TB GDP HBP SH SF IBB Pos \
0 22 34 5 ... 0.649 NaN 42 0 1 0 1 0 NaN
1 72 124 21 ... 0.825 NaN 180 4 11 0 6 0 NaN
2 25 43 5 ... 0.942 NaN 84 0 7 0 0 0 NaN
3 72 132 20 ... 0.715 101.0 222 5 6 1 5 0 *6/H
4 72 132 20 ... 0.715 101.0 222 5 6 1 5 0 NaN
5 86 157 24 ... 0.715 101.0 264 6 7 1 6 0 NaN
Awards
0 TRC · NYPL
1 DAV,FAY · MIDW,CARL
2 SKT,AST · AAAW,FCL
3 GG
4 NaN
5 NaN
[6 rows x 30 columns]]
However, I end up with the error "Must pass 2-d input. shape=(1, 6, 30)" after my last line. I have tried using df = dfs[1], but got the error "list index out of range". Is there any way I can turn dfs from a list into a dataframe?
What do you mean you only want the second table? There's only one table; it's 6 rows and 30 columns. The backslashes show up when whatever you're printing to isn't wide enough to contain the dataframe without line wrapping.
The pd.read_html() function returns a List[DataFrame] so you first need to grab your dataframe from the list, and then you can subset it to get the columns you care about:
df = dfs[0]
columns = ['R', 'H', '2B', '3B', 'HR', 'RBI', 'SB', 'CS', 'BB', 'SO', 'BA', 'OBP', 'SLG', 'OPS', 'OPS+', 'TB', 'GDP', 'HBP', 'SH', 'SF', 'IBB', 'Pos']
print(df[columns])
Output:
R H 2B 3B HR RBI SB CS BB SO BA OBP SLG OPS OPS+ TB GDP HBP SH SF IBB Pos
0 22 34 5 0 1 10 3 0 18 19 0.250 0.340 0.309 0.649 NaN 42 0 1 0 1 0 NaN
1 72 124 21 7 7 54 20 10 47 90 0.303 0.385 0.440 0.825 NaN 180 4 11 0 6 0 NaN
2 25 43 5 3 10 21 6 1 8 41 0.297 0.363 0.579 0.942 NaN 84 0 7 0 0 0 NaN
3 72 132 20 2 22 63 11 2 22 135 0.253 0.289 0.426 0.715 101.0 222 5 6 1 5 0 *6/H
4 72 132 20 2 22 63 11 2 22 135 0.253 0.289 0.426 0.715 101.0 222 5 6 1 5 0 NaN
5 86 157 24 2 26 75 13 2 26 161 0.253 0.289 0.426 0.715 101.0 264 6 7 1 6 0 NaN
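As for the original error: pd.DataFrame(dfs) fails because dfs is a list holding a single 6x30 dataframe, so pandas sees a 3-d shape of (1, 6, 30). A quick way to inspect what read_html found, plus a hedged alternative using its match parameter (the match string here is an assumption about the page, not something verified against it):
dfs = pd.read_html(x.content)
print(len(dfs))  # number of tables parsed; here it is 1, which is why dfs[1] raised IndexError
df = dfs[0]      # take the dataframe out of the list
# or keep only tables whose text matches a pattern:
# dfs = pd.read_html(x.content, match='Standard Batting')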

Join/ Merge 2 Comma Separated Text Files

I would really appreciate assistance in developing Python/pandas code to left-join 2 separate CSV files. I'm very new to Python, so I'm unclear how to begin. I have used Excel plugins to achieve the same, but they would normally hang or take hours to complete due to the huge number of records being processed.
Scenario: joining on the first column.
CSV1
AN DNM OCD TRI
1 343 5656 90
2 344 5657 91
3 345 5658 92
4 346 5659 93
CSV2
AN2 STATE PLAN
4 3 19
3 2 35
7 3 19
8 3 19
Result inclusive of a match status if possible:
AN DNM OCD TRI STATE PLAN Join Status
1 343 5656 90 No_match
2 344 5657 91 No_match
3 345 5658 92 2 35 Match
4 346 5659 93 3 19 Match
All help appreciated.
You can use .merge with the indicator= parameter:
out = df1.merge(
    df2, left_on="AN", right_on="AN2", indicator="Join Status", how="left"
)
out = out.drop(columns="AN2")
out["Join Status"] = out["Join Status"].map(
    {"left_only": "No_match", "both": "Match"}
)
print(out)
Prints:
AN DNM OCD TRI STATE PLAN Join Status
0 1 343 5656 90 NaN NaN No_match
1 2 344 5657 91 NaN NaN No_match
2 3 345 5658 92 2.0 35.0 Match
3 4 346 5659 93 3.0 19.0 Match
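Note that the NaNs introduced by the left join force STATE and PLAN into float dtype (hence 2.0 and 35.0 above). If you want integers back, pandas' nullable integer dtype can hold the missing values:
out[["STATE", "PLAN"]] = out[["STATE", "PLAN"]].astype("Int64")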
Let's assume you have df1 and df2 and you want to merge the two dataframes:
df = df1.merge(df2, how='left', left_on='AN', right_on='AN2')
I hope this will help you.
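If you are starting from the raw files rather than existing dataframes, here is a minimal end-to-end sketch (the file names are assumptions; adjust paths and separators to your data):
import pandas as pd

df1 = pd.read_csv('csv1.csv')  # columns: AN, DNM, OCD, TRI
df2 = pd.read_csv('csv2.csv')  # columns: AN2, STATE, PLAN

out = df1.merge(df2, left_on='AN', right_on='AN2', how='left', indicator='Join Status')
out['Join Status'] = out['Join Status'].map({'left_only': 'No_match', 'both': 'Match'})
out.drop(columns='AN2').to_csv('result.csv', index=False)
For large files this should be far faster than the Excel plugins mentioned in the question.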

python pandas how to read csv file by block

I'm trying to read a CSV file, block by block.
CSV looks like:
No.,time,00:00:00,00:00:01,00:00:02,00:00:03,00:00:04,00:00:05,00:00:06,00:00:07,00:00:08,00:00:09,00:00:0A,...
1,2021/09/12 02:16,235,610,345,997,446,130,129,94,555,274,4,
2,2021/09/12 02:17,364,210,371,341,294,87,179,106,425,262,3,
1434,2021/09/12 02:28,269,135,372,262,307,73,86,93,512,283,4,
1435,2021/09/12 02:29,281,207,688,322,233,75,69,85,663,276,2,
No.,time,00:00:10,00:00:11,00:00:12,00:00:13,00:00:14,00:00:15,00:00:16,00:00:17,00:00:18,00:00:19,00:00:1A,...
1,2021/09/12 02:16,255,619,200,100,453,456,4,19,56,23,4,
2,2021/09/12 02:17,368,21,37,31,24,8,19,1006,4205,2062,30,
1434,2021/09/12 02:28,2689,1835,3782,2682,307,743,256,741,52,23,6,
1435,2021/09/12 02:29,2281,2047,6848,3522,2353,755,659,885,6863,26,36,
Blocks start with a No. header row, and data rows follow. My current code:
def run(sock, delay, zipobj):
    zf = zipfile.ZipFile(zipobj)
    for f in zf.namelist():
        print(zf.filename)
        print("csv name: ", f)
        df = pd.read_csv(zf.open(f), skiprows=[0, 1, 2, 3, 4, 5])  # nrows=1435? (but what about the next blocks?)
        print(df, '\n')
        date_pattern = '%Y/%m/%d %H:%M'
        df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time, date_pattern))), axis=1)  # create epoch as a column
        tuples = []  # data will be saved in a list
        formated_str = 'perf.type.serial.object.00.00.00.TOTAL_IOPS'
        for each_column in list(df.columns)[2:-1]:
            for e in zip(list(df['epoch']), list(df[each_column])):
                each_column = each_column.replace("X", '')
                #print(f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS", e)
                tuples.append((f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS", e))
        package = pickle.dumps(tuples, 1)
        size = struct.pack('!L', len(package))
        sock.sendall(size)
        sock.sendall(package)
        time.sleep(delay)
Many thanks for your help,
Load your file with pd.read_csv and start a new block each time the first column contains 'No.'. Then use groupby to iterate over the blocks and rebuild a dataframe from each one.
data = pd.read_csv('data.csv', header=None)

dfs = []
for _, df in data.groupby(data[0].eq('No.').cumsum()):
    df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
    dfs.append(df.rename_axis(columns=None))
Output:
# First block
>>> dfs[0]
No. time 00:00:00 00:00:01 00:00:02 00:00:03 00:00:04 00:00:05 00:00:06 00:00:07 00:00:08 00:00:09 00:00:0A ...
0 1 2021/09/12 02:16 235 610 345 997 446 130 129 94 555 274 4 NaN
1 2 2021/09/12 02:17 364 210 371 341 294 87 179 106 425 262 3 NaN
2 1434 2021/09/12 02:28 269 135 372 262 307 73 86 93 512 283 4 NaN
3 1435 2021/09/12 02:29 281 207 688 322 233 75 69 85 663 276 2 NaN
# Second block
>>> dfs[1]
No. time 00:00:10 00:00:11 00:00:12 00:00:13 00:00:14 00:00:15 00:00:16 00:00:17 00:00:18 00:00:19 00:00:1A ...
0 1 2021/09/12 02:16 255 619 200 100 453 456 4 19 56 23 4 NaN
1 2 2021/09/12 02:17 368 21 37 31 24 8 19 1006 4205 2062 30 NaN
2 1434 2021/09/12 02:28 2689 1835 3782 2682 307 743 256 741 52 23 6 NaN
3 1435 2021/09/12 02:29 2281 2047 6848 3522 2353 755 659 885 6863 26 36 NaN
and so on.
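One caveat worth adding: because each block is rebuilt from .values, all of its columns come back as object (string) dtype. If you need numbers, convert the data columns explicitly, for example:
block = dfs[0]
num_cols = block.columns[2:]  # everything after 'No.' and 'time'
block[num_cols] = block[num_cols].apply(pd.to_numeric, errors='coerce')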
Sorry, I don't find a correct way to use this with my code:
def run(sock, delay, zipobj):
    zf = zipfile.ZipFile(zipobj)
    for f in zf.namelist():
        print("using zip :", zf.filename)
        myobject = re.search(r'(^[a-zA-Z]{4})_.*', f)
        Objects = myobject.group(1)
        if Objects == 'LDEV':
            metric = re.search('.*LDEV_(.*)/.*', f)
            metric = metric.group(1)
        elif Objects == 'Port':
            metric = re.search('.*/(Port_.*).csv', f)
            metric = metric.group(1)
        else:
            print("None")
        print("using csv : ", f)
        #df = pd.read_csv(zf.open(f), skiprows=[0,1,2,3,4,5])
        data = pd.read_csv(zf.open(f), header=None, skiprows=[0, 1, 2, 3, 4, 5])
        dfs = []
        for _, df in data.groupby(data[0].eq('No.').cumsum()):
            df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
            dfs.append(df.rename_axis(columns=None))
        print("here")
        date_pattern = '%Y/%m/%d %H:%M'
        df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time, date_pattern))), axis=1)  # create epoch as a column
        tuples = []  # data will be saved in a list
        #formated_str = 'perf.type.serial.object.00.00.00.TOTAL_IOPS'
        for each_column in list(df.columns)[2:-1]:
            for e in zip(list(df['epoch']), list(df[each_column])):
                each_column = each_column.replace("X", '')
                tuples.append((f"perf.type.serial.{Objects}.{each_column}.{metric}", e))
        package = pickle.dumps(tuples, 1)
        size = struct.pack('!L', len(package))
        sock.sendall(size)
        sock.sendall(package)
        time.sleep(delay)
Thanks for your help,
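A likely problem in the attempt above (my reading of the snippet): after the blocks are collected in dfs, the rest of the function still operates on df, which is only the last block left over from the groupby loop. Moving that processing into a loop over the blocks would look roughly like this, keeping the rest of the logic unchanged:
date_pattern = '%Y/%m/%d %H:%M'
tuples = []
for df in dfs:
    # build the epoch column per block, then emit one tuple per cell
    df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time, date_pattern))), axis=1)
    for each_column in list(df.columns)[2:-1]:
        for e in zip(list(df['epoch']), list(df[each_column])):
            tuples.append((f"perf.type.serial.{Objects}.{each_column}.{metric}", e))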

Python - Aggregate on week of the month basis and compare

I am working on a small CSV data set where the values are indexed by week-of-the-month occurrence. What I want is to aggregate all of the weeks in sequence, excluding the current week (the last column), to compute a weekly average of the remaining data (average of the ...10/1 + 11/1 + 12/1... columns to get the week 1 data).
The data is available in this format:
char 2019/11/1 2019/11/2 2019/11/3 2019/11/4 2019/11/5 2019/12/1 2019/12/2 2019/12/3 2019/12/4 2019/12/5 2020/1/1
A 1477 1577 1401 773 310 1401 1464 1417 909 712 289
B 1684 1485 1220 894 297 1618 1453 1335 920 772 275
C 37 10 1 3 6 17 6 6 3 2 1
D 2041 1883 1302 1136 376 2175 1729 1167 960 745 278
E 6142 5991 5499 3883 1036 4949 6187 5760 3974 2339 826
F 842 846 684 462 140 789 802 134 386 251 94
This column (2020/1/1) shall later be used to compare with the mean of all aggregate values from week one. The desired output is something like this:
char W1 W2 W3 W4 W5 2020/1/1
A 1439 1520.5 1409 841 511 289
B 1651 1469 1277.5 907 534.5 275
C 27 8 3.5 3 4 1
D 2108 1806 1234.5 1048 560.5 278
E 5545.5 6089 5629.5 3928.5 1687.5 826
F 815.5 824 409 424 195.5 94
Is it possible to use rolling or resample in such a case? Any ideas on how to do it?
I believe you need DataFrame.resample by weeks:
df = df.set_index(['char', '2020/1/1'])
df.columns = pd.to_datetime(df.columns, format='%Y/%m/%d')
df = df.resample('W', axis=1).mean()
print (df)
2019-11-03 2019-11-10 2019-11-17 2019-11-24 2019-12-01 \
char 2020/1/1
A 289 1485.000000 541.5 NaN NaN 1401.0
B 275 1463.000000 595.5 NaN NaN 1618.0
C 1 16.000000 4.5 NaN NaN 17.0
D 278 1742.000000 756.0 NaN NaN 2175.0
E 826 5877.333333 2459.5 NaN NaN 4949.0
F 94 790.666667 301.0 NaN NaN 789.0
2019-12-08
char 2020/1/1
A 289 1125.50
B 275 1120.00
C 1 4.25
D 278 1150.25
E 826 4565.00
F 94 393.25
EDIT: If you want to group the first 7 days of each month into separate per-month groups (days 1-7 → W1, days 8-14 → W2, and so on), use:
df = df.set_index(['char', '2020/1/1'])
c = pd.to_datetime(df.columns, format='%Y/%m/%d')
df.columns = [f'{y}/{m}/W{w}' for w,m,y in zip((c.day - 1) // 7 + 1,c.month, c.year)]
df = df.groupby(df.columns, axis=1).mean()
print (df)
2019/11/W1 2019/12/W1
char 2020/1/1
A 289 1107.6 1180.6
B 275 1116.0 1219.6
C 1 11.4 6.8
D 278 1347.6 1355.2
E 826 4510.2 4641.8
F 94 594.8 472.4
EDIT1: For grouping by day and year use DatetimeIndex.strftime:
df = df.set_index(['char', '2020/1/1'])
df.columns = pd.to_datetime(df.columns, format='%Y/%m/%d').strftime('%d-%Y')
df = df.groupby(df.columns, axis=1).mean()
print (df)
01-2019 02-2019 03-2019 04-2019 05-2019
char 2020/1/1
A 289 1439.0 1520.5 1409.0 841.0 511.0
B 275 1651.0 1469.0 1277.5 907.0 534.5
C 1 27.0 8.0 3.5 3.0 4.0
D 278 2108.0 1806.0 1234.5 1048.0 560.5
E 826 5545.5 6089.0 5629.5 3928.5 1687.5
F 94 815.5 824.0 409.0 424.0 195.5
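Since the sample data has exactly one column per (day, month) pair, this last grouping reproduces the desired W1..W5 aggregation. To get the W1..W5 labels from the question, a small rename sketch:
df.columns = [f'W{int(c.split("-")[0])}' for c in df.columns]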
Here is a way using groupby:
m = df.set_index(['char', '2020/1/1']).rename(columns=lambda x: pd.to_datetime(x))
m.groupby(m.columns.week, axis=1).mean().add_prefix('W_').reset_index()
char 2020/1/1 W_44 W_45 W_48 W_49
0 A 289 1485.000000 541.5 1401.0 1125.50
1 B 275 1463.000000 595.5 1618.0 1120.00
2 C 1 16.000000 4.5 17.0 4.25
3 D 278 1742.000000 756.0 2175.0 1150.25
4 E 826 5877.333333 2459.5 4949.0 4565.00
5 F 94 790.666667 301.0 789.0 393.25
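A note for newer pandas versions (an assumption about your environment): DatetimeIndex.week was deprecated in pandas 1.1 and later removed, as was groupby(..., axis=1). A roughly equivalent modern form works on the transpose instead:
m = df.set_index(['char', '2020/1/1']).rename(columns=lambda x: pd.to_datetime(x))
weeks = m.columns.isocalendar().week  # replacement for the removed .week attribute
m.T.groupby(weeks.values).mean().T.add_prefix('W_').reset_index()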
