I have a txt file that contains accelerometer data, and I would like to parse this file into columns.
Below is the data; the problem is that I want only these values as columns
(X value, Y value, Z value, time diff in ms), and I want to remove the headers and footers of the file.
# Accelerometer Values
# filename: default__3.txt
# Saving start time: Sat Nov 15 11:09:33 GMT+03:30 2014
# sensor resolution: 0.1m/s^2
#Sensorvendor: ST Microelectronic, name: ST accelerometer, type: 1,version : 104, range 16.0
# X value, Y value, Z value, time diff in ms
-3.236 -4.726 8.982 1
-3.206 -4.716 8.884 10
-3.187 -4.716 8.816 10
-3.138 -4.716 8.757 10
-3.138 -4.746 8.757 1
-3.059 -4.815 8.816 9
-3.059 -4.864 8.825 10
-3.069 -5.021 8.865 10
-3.069 -4.903 8.865 1
-3.089 -4.864 8.924 9
-3.108 -4.903 9.051 13
-3.157 -4.903 9.247 8
-3.206 -4.893 9.404 9
-3.275 -4.883 9.581 11
-3.314 -4.726 9.62 10
-3.314 -4.805 9.62 1
-3.324 -4.756 9.512 9
-3.324 -4.667 9.335 11
-3.246 -4.589 9.247 9
-3.177 -4.56 9.041 11
-3.02 -4.56 8.855 9
-3.128 -4.54 8.855 1
-3.098 -4.628 8.708 10
-3.098 -4.628 8.62 9
-3.02 -4.687 8.62 1
-3.02 -4.687 8.541 9
-2.991 -4.775 8.541 1
-2.961 -4.805 8.512 10
# end
#Sat Nov 15 11:10:36 GMT+03:30 2014
Don't reinvent the wheel. Load and process the data with pandas:
>>> import pandas as pd
>>> data = pd.read_csv('data.txt', sep=' ', header=None, comment='#')
>>> data
0 1 2 3
0 -3.236 -4.726 8.982 1
1 -3.206 -4.716 8.884 10
2 -3.187 -4.716 8.816 10
3 -3.138 -4.716 8.757 10
4 -3.138 -4.746 8.757 1
5 -3.059 -4.815 8.816 9
6 -3.059 -4.864 8.825 10
7 -3.069 -5.021 8.865 10
8 -3.069 -4.903 8.865 1
9 -3.089 -4.864 8.924 9
10 -3.108 -4.903 9.051 13
11 -3.157 -4.903 9.247 8
12 -3.206 -4.893 9.404 9
13 -3.275 -4.883 9.581 11
14 -3.314 -4.726 9.620 10
15 -3.314 -4.805 9.620 1
16 -3.324 -4.756 9.512 9
17 -3.324 -4.667 9.335 11
18 -3.246 -4.589 9.247 9
19 -3.177 -4.560 9.041 11
20 -3.020 -4.560 8.855 9
21 -3.128 -4.540 8.855 1
22 -3.098 -4.628 8.708 10
23 -3.098 -4.628 8.620 9
24 -3.020 -4.687 8.620 1
25 -3.020 -4.687 8.541 9
26 -2.991 -4.775 8.541 1
27 -2.961 -4.805 8.512 10
To get a particular column as an array:
>>> data[2].values
array([8.982, 8.884, 8.816, 8.757, 8.757, 8.816, 8.825, 8.865, 8.865,
8.924, 9.051, 9.247, 9.404, 9.581, 9.62 , 9.62 , 9.512, 9.335,
9.247, 9.041, 8.855, 8.855, 8.708, 8.62 , 8.62 , 8.541, 8.541,
8.512])
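If you'd rather refer to columns by name, read_csv can assign names up front. A minimal sketch; the column names here are my own choice, echoing the file's header comment:
import pandas as pd

# assign names matching the '# X value, Y value, Z value, time diff in ms' header
data = pd.read_csv('data.txt', sep=' ', header=None, comment='#',
                   names=['x', 'y', 'z', 'dt_ms'])
print(data['z'].values)  # same array as data[2].values above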
Since you have clearly defined comment lines in your file, it is fairly simple to filter them out.
Here's what I came up with:
with open("default__3.txt", "r") as f:
    lines = f.readlines()

x_values = []
y_values = []
z_values = []
time_diffs = []

for line in lines:
    if line.startswith('#'):  # skip comment lines
        continue
    tokens = line.split()  # split on any whitespace; this also drops the trailing newline
    if len(tokens) < 4:  # skip blank lines
        continue
    x_values.append(float(tokens[0]))
    y_values.append(float(tokens[1]))
    z_values.append(float(tokens[2]))
    time_diffs.append(int(tokens[3]))

print(x_values)
print(y_values)
print(z_values)
print(time_diffs)
This puts your values into lists which you can manipulate as you see fit. I used it to print out the following:
[-3.236, -3.206, -3.187, -3.138, -3.138, -3.059, -3.059, -3.069, -3.069, -3.089, -3.108, -3.157, -3.206, -3.275, -3.314, -3.314, -3.324, -3.324, -3.246, -3.177, -3.02, -3.128, -3.098, -3.098, -3.02, -3.02, -2.991, -2.961]
[-4.726, -4.716, -4.716, -4.716, -4.746, -4.815, -4.864, -5.021, -4.903, -4.864, -4.903, -4.903, -4.893, -4.883, -4.726, -4.805, -4.756, -4.667, -4.589, -4.56, -4.56, -4.54, -4.628, -4.628, -4.687, -4.687, -4.775, -4.805]
[8.982, 8.884, 8.816, 8.757, 8.757, 8.816, 8.825, 8.865, 8.865, 8.924, 9.051, 9.247, 9.404, 9.581, 9.62, 9.62, 9.512, 9.335, 9.247, 9.041, 8.855, 8.855, 8.708, 8.62, 8.62, 8.541, 8.541, 8.512]
[1, 10, 10, 10, 1, 9, 10, 10, 1, 9, 13, 8, 9, 11, 10, 1, 9, 11, 9, 11, 9, 1, 10, 9, 1, 9, 1, 10]
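For completeness, plain NumPy can also parse this layout, since loadtxt skips '#' comment lines and splits on whitespace by default. A minimal sketch, assuming the file shown above:
import numpy as np

arr = np.loadtxt('default__3.txt', comments='#')  # shape (n_rows, 4)
# unpack the four columns; time_diffs comes back as float, cast with .astype(int) if needed
x_values, y_values, z_values, time_diffs = arr.T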
I have tried out the following snippet of code for my project:
import pandas as pd
import nltk
from nltk.corpus import wordnet as wn
nltk.download('wordnet')
df=[]
hypo = wn.synset('science.n.01').hyponyms()
hyper = wn.synset('science.n.01').hypernyms()
mero = wn.synset('science.n.01').part_meronyms()
holo = wn.synset('science.n.01').part_holonyms()
ent = wn.synset('science.n.01').entailments()
df = df+hypo+hyper+mero+holo+ent
df_agri_clean = pd.DataFrame(df)
df_agri_clean.columns=["Items"]
print(df_agri_clean)
pd.set_option('display.expand_frame_repr', False)
It has given me this output of a dataframe:
Items
0 Synset('agrobiology.n.01')
1 Synset('agrology.n.01')
2 Synset('agronomy.n.01')
3 Synset('architectonics.n.01')
4 Synset('cognitive_science.n.01')
5 Synset('cryptanalysis.n.01')
6 Synset('information_science.n.01')
7 Synset('linguistics.n.01')
8 Synset('mathematics.n.01')
9 Synset('metallurgy.n.01')
10 Synset('metrology.n.01')
11 Synset('natural_history.n.01')
12 Synset('natural_science.n.01')
13 Synset('nutrition.n.03')
14 Synset('psychology.n.01')
15 Synset('social_science.n.01')
16 Synset('strategics.n.01')
17 Synset('systematics.n.01')
18 Synset('thanatology.n.01')
19 Synset('discipline.n.01')
20 Synset('scientific_theory.n.01')
21 Synset('scientific_knowledge.n.01')
Printing df itself (the plain Python list, before it is wrapped in a DataFrame) shows the same items:
[Synset('agrobiology.n.01'), Synset('agrology.n.01'), Synset('agronomy.n.01'), Synset('architectonics.n.01'), Synset('cognitive_science.n.01'), Synset('cryptanalysis.n.01'), Synset('information_science.n.01'), Synset('linguistics.n.01'), Synset('mathematics.n.01'), Synset('metallurgy.n.01'), Synset('metrology.n.01'), Synset('natural_history.n.01'), Synset('natural_science.n.01'), Synset('nutrition.n.03'), Synset('psychology.n.01'), Synset('social_science.n.01'), Synset('strategics.n.01'), Synset('systematics.n.01'), Synset('thanatology.n.01'), Synset('discipline.n.01'), Synset('scientific_theory.n.01'), Synset('scientific_knowledge.n.01')]
I wish to change every word under "Items" like so:
Synset('agrobiology.n.01') => agrobiology.n.01
or
Synset('agrobiology.n.01') => 'agrobiology'
Any help will be appreciated! Thanks!
To access the name of each of these items, call its .name() method. You could use a list comprehension to update the items as follows:
df_agri_clean['Items'] = [df_agri_clean['Items'][i].name() for i in range(len(df_agri_clean))]
df_agri_clean
The output will be as you expected:
Items
0 agrobiology.n.01
1 agrology.n.01
2 agronomy.n.01
3 architectonics.n.01
4 cognitive_science.n.01
5 cryptanalysis.n.01
6 information_science.n.01
7 linguistics.n.01
8 mathematics.n.01
9 metallurgy.n.01
10 metrology.n.01
11 natural_history.n.01
12 natural_science.n.01
13 nutrition.n.03
14 psychology.n.01
15 social_science.n.01
16 strategics.n.01
17 systematics.n.01
18 thanatology.n.01
19 discipline.n.01
20 scientific_theory.n.01
21 scientific_knowledge.n.01
To also strip the ".n.01" suffix from each string, you could do the following:
df_agri_clean['Items'] = [df_agri_clean['Items'][i].name().replace('.n.01', '') for i in range(len(df_agri_clean))]
df_agri_clean
Output (just like your second expected output)
Items
0 agrobiology
1 agrology
2 agronomy
3 architectonics
4 cognitive_science
5 cryptanalysis
6 information_science
7 linguistics
8 mathematics
9 metallurgy
10 metrology
11 natural_history
12 natural_science
13 nutrition.n.03
14 psychology
15 social_science
16 strategics
17 systematics
18 thanatology
19 discipline
20 scientific_theory
21 scientific_knowledge
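Note that .replace('.n.01', '') leaves 'nutrition.n.03' untouched, as the output above shows, because its suffix differs. A more general sketch (my own variant) splits each synset name on the first dot instead, so any '.pos.NN' suffix is dropped:
# rebuild the column from the original Synset list, keeping only the lemma
df_agri_clean['Items'] = [syn.name().split('.')[0] for syn in df]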
I have a df that has 24 cols, and I want to iterate through each row and append the values to a list consecutively.
The code below does the job - but it also appends the index value, name, and dtype, which I need to remove.
results = []
for row in data.iterrows():
    results.append(row)
(0, 1 11.87
2 7.60
3 0.32
4 3.11
5 47.43
6 47.81
7 24.74
8 32.57
9 39.49
10 24.48
11 18.14
12 26.52
13 14.17
14 13.45
15 17.80
16 17.89
17 27.39
18 51.55
19 60.22
20 69.64
21 75.97
22 67.45
23 52.88
24 53.25
Name: 0, dtype: float64)
(1, 1 54.49
2 51.67
3 53.68
4 33.81
5 26.99
6 25.80
7 36.35
8 28.85
9 26.01
10 8.44
11 1.64
12 8.01
13 23.41
14 16.22
15 16.30
16 8.90
17 1.93
18 0.00
19 2.79
20 30.24
21 55.58
22 62.79
23 74.70
24 68.46
Name: 1, dtype: float64)
It's similar to iterating through each row, transposing the selected row, then appending the values onto a list consecutively. If a df is (5, 24), then the length of the list will be 5*24 = 120.
You don't need to iterate through them. Try this:
import pandas as pd

inp = [{'c1': 10, 'c2': 100}, {'c1': 11, 'c2': 110}, {'c1': 12, 'c2': 120}]
df = pd.DataFrame(inp)
print(df)
c1 c2
0 10 100
1 11 110
2 12 120
Now you can use .values.ravel() to create a list of all dataframe values:
list(df.values.ravel())
Output:
[10, 100, 11, 110, 12, 120]
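A small note on spelling: recent pandas recommends df.to_numpy() over df.values, so the same one-liner can be written as:
list(df.to_numpy().ravel())  # [10, 100, 11, 110, 12, 120]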
As your question is asked, you likely want a tuple/list of the corresponding values for each row; the output you are asking for is not a flattened list.
pandas has good functions that build on numpy, and numpy is a great module to work with when it comes to arrays/lists.
Let's say you have a DataFrame called data, as in this case; data.to_numpy() will output a nested array with the values for each row.
output:
[['joe' 'Doe' 34]
['bob' 'Warren' 20]
['Anna' 'Anderson' 10]]
You can even index the result, e.g. data.to_numpy()[0].
You can also .flatten() it: data.to_numpy().flatten()
Output with .flatten():
['joe' 'Doe' 34 'bob' 'Warren' 20 'Anna' 'Anderson' 10]
You can also use a for loop:
for i in data.to_numpy():
    print(i)
This gives you every row of the nested list.
I have the following coordinate dataframe, divided into blocks. Each block starts with a header: seq0_leftend, seq0_rightend, then seq1_leftend, seq1_rightend, then seq2_leftend, seq2_rightend, then seq3_leftend, seq3_rightend, and so on. For each block, whenever a row's coordinates are negative, I would like to extract that row together with the row above and the row below it. An example of my dataframe file:
seq0_leftend seq0_rightend
0 7 107088
1 107089 108940
2 108941 362759
3 362760 500485
4 500486 509260
5 509261 702736
seq1_leftend seq1_rightend
0 1 106766
1 106767 108619
2 108620 355933
3 355934 488418
4 488419 497151
5 497152 690112
6 690113 700692
7 700693 721993
8 721994 722347
9 722348 946296
10 946297 977714
11 977715 985708
12 -985709 -990725
13 991992 1042023
14 1042024 1259523
15 1259524 1261239
seq2_leftend seq2_rightend
0 1 109407
1 362514 364315
2 109408 362513
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
13 1015418 1069976
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
seq3_leftend seq3_rightend
0 1 112030
1 112031 113882
2 113883 381662
3 381663 519575
4 519576 528317
5 528318 724500
6 724501 735077
7 735078 759456
8 759457 763157
9 763158 996929
10 996931 1034492
11 1034493 1040984
12 -1040985 -1061402
13 1071212 1125426
14 1125427 1353901
15 1353902 1356209
16 1356210 1392818
seq4_leftend seq4_rightend
0 1 105722
1 105723 107575
2 107576 355193
3 355194 487487
4 487488 496220
5 496221 689560
6 689561 700139
7 700140 721438
8 721458 721497
9 721498 947183
10 947184 978601
11 978602 986595
12 -986596 -991612
13 994605 1046245
14 1046247 1264692
15 1264693 1266814
Finally write a new csv with the data of interest, an example of the final result that I would like, would be this:
seq1_leftend seq1_rightend
11 977715 985708
12 -985709 -990725
13 991992 1042023
seq2_leftend seq2_rightend
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
seq3_leftend seq3_rightend
11 1034493 1040984
12 -1040985 -1061402
13 1071212 1125426
seq4_leftend seq4_rightend
11 978602 986595
12 -986596 -991612
13 994605 1046245
I assume that you have a list of DataFrames, let's call it src.
To convert a single DataFrame, define the following function:
def findRows(df):
    col = df.iloc[:, 0]
    if col.lt(0).any():
        return df[col.lt(0) | col.shift(1).lt(0) | col.shift(-1).lt(0)]
    else:
        return None
Note that this function starts with reading column 0 from the source
DataFrame, so it is independent of the name of this column.
Then it checks whether any element in this column is < 0.
If found, the returned object is a DataFrame with the rows where
a value < 0 occurs:
either in the row itself,
or in the previous row,
or in the next row.
If not found, this function returns None (from your expected result
I see that in such a case you don't even want an empty DataFrame).
The first stage is to collect results of this function called on each
DataFrame from src:
result = [ findRows(df) for df in src ]
And the last step is to filter out the elements which are None:
result = list(filter(None.__ne__, result))
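A list comprehension does the same thing a bit more readably:
result = [df for df in result if df is not None]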
To see the result, run:
for df in result:
    print(df)
For src containing the first 3 of your DataFrames, I got:
seq1_leftend seq1_rightend
11 977715 985708
12 -985709 -990725
13 991992 1042023
seq2_leftend seq2_rightend
3 364450 504968
4 -504969 -515995
5 515996 671291
6 -671295 -682263
7 682264 707010
8 -707011 -709780
9 709781 934501
10 973791 1015417
11 -961703 -973790
12 948955 961702
14 1069977 1300633
15 -1300634 -1301616
16 1301617 1344821
17 -1515463 -1596433
18 1514459 1515462
19 -1508094 -1514458
20 1346999 1361467
21 -1361468 -1367472
22 1369840 1508093
As you can see, the resulting list contains only results
originating from the second and third source DataFrame.
The first was filtered out, since findRows returned
None from its processing.
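Since the question also asks for the blocks of interest to be written to a new csv, here is one minimal sketch (the file name is my own choice) that appends every surviving block to a single file:
# write each surviving block, with its header row and index, into one csv
with open('blocks_of_interest.csv', 'w', newline='') as f:
    for df in result:
        df.to_csv(f)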
If my data looks like this:
Index Country ted_Val1 sam_Val1 ... ted_Val10 sam_Val10
1 Australia 1 3 ... 20 5
2 Bambua 12 33 ... 15 56
3 Tambua 14 34 ... 10 58
df = pd.DataFrame([["Australia", 1, 3, 20, 5],
["Bambua", 12, 33, 15, 56],
["Tambua", 14, 34, 10, 58]
], columns=["Country", "ted_Val1", "sam_Val1", "ted_Val10", "sam_Val10"]
)
I'd like to subtract each 'sam_' column from its matching 'ted_' column using a list, creating a new column starting with 'dif_', such that:
Index Country ted_Val1 sam_Val1 diff_Val1 ... ted_Val10 sam_Val10 diff_val10
1 Australia 1 3 -2 ... 20 5 -15
2 Bambua 12 33 12 ... 15 56 -41
3 Tambua 14 34 14... 10 58 -48
so far I've got:
calc_vars = ['ted_Val1',
'sam_Val1',
'ted_Val10',
'sam_Val10']
for i in calc_vars:
df_diff['dif_' + str(i)] = df.['ted_' + str(i)] - df.['sam_' + str(i)]
but I'm getting errors and am not sure where to go from here. As a warning, this is dummy data, and the real column names can contain several underscores.
IIUC you can use filter to choose the columns for subtraction (assuming your columns are properly sorted like your sample):
print(pd.concat([df, pd.DataFrame(df.filter(like="ted").to_numpy() - df.filter(like="sam").to_numpy(),
                                  columns=["diff" + i.split("_")[-1] for i in df.columns if "ted_Val" in i])],
                axis=1))
Country ted_Val1 sam_Val1 ted_Val10 sam_Val10 diff1 diff10
0 Australia 1 3 20 5 -2 15
1 Bambua 12 33 15 56 -21 -41
2 Tambua 14 34 10 58 -20 -48
Try this:
calc_vars = ['ted_Val1', 'sam_Val1', 'ted_Val10', 'sam_Val10']
# extract even & odd values from calc_vars
# ['ted_Val1', 'ted_Val10'], ['sam_Val1', 'sam_Val10']
for ted, sam in zip(calc_vars[::2], calc_vars[1::2]):
    df['diff_' + ted.split("_")[-1]] = df[ted] - df[sam]
Edit: if columns are not sorted,
ted_cols = sorted(df.filter(regex=r"ted_Val\d+"), key=lambda x: x.split("_")[-1])
sam_cols = sorted(df.filter(regex=r"sam_Val\d+"), key=lambda x: x.split("_")[-1])
for ted, sam in zip(ted_cols, sam_cols):
    df['diff_' + ted.split("_")[-1]] = df[ted] - df[sam]
Country ted_Val1 sam_Val1 ted_Val10 sam_Val10 diff_Val1 diff_Val10
0 Australia 1 3 20 5 -2 15
1 Bambua 12 33 15 56 -21 -41
2 Tambua 14 34 10 58 -20 -48
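Since the question warns that real names can contain several underscores, splitting on the last "_" can mis-pair columns. Here is a sketch that instead pairs columns by the full suffix after the prefix, assuming the prefixes are exactly 'ted_' and 'sam_':
# pair ted_/sam_ columns by everything after the prefix, so extra
# underscores in the names cannot break the matching
for col in df.columns:
    if col.startswith('ted_'):
        suffix = col[len('ted_'):]
        sam_col = 'sam_' + suffix
        if sam_col in df.columns:
            df['dif_' + suffix] = df[col] - df[sam_col]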
I am running a loop that appends three fields. predictfinal is a list, though it doesn't strictly need to be one.
predictfinal.append(y_hat_orig[0])
predictfinal.append(mape)
predictfinal.append(length)
At the end, predictfinal is one long list. But I really want to conform the list into a DataFrame where each row has 3 columns; the list does not distinguish between the 3 fields, it's just a long flat sequence with commas in between. How can I slice predictfinal into 3 columns and build a DataFrame from the current unstructured list?
predictfinal
Out[88]:
[1433.0459967608983,
1.6407741379111223,
23,
1433.6389125340916,
1.6474721044455922,
22,
1433.867408791692,
1.6756763089082383,
21,
1433.8484984008207,
1.6457581105556003,
20,
1433.6340460965778,
1.6380908467895527,
19,
1437.0294365907992,
1.6147672264908473,
18,
1439.7485102740507,
1.5010415925555876,
17,
1440.950406295299,
1.433891246672529,
16,
1434.837060644701,
1.5252803314930383,
15,
1434.9716303636983,
1.6125952442799232,
14,
1441.3153523102953,
3.2633984339696185,
13,
1435.6932462859334,
3.2703435261200497,
12,
1419.9057834496082,
1.9100005818319687,
11,
1426.0739741342488,
1.947684057178654,
10]
Based on https://stackoverflow.com/a/48347320/6926444
We can achieve this by using zip() and iter(). The code below consumes three elements of the list at a time.
res = pd.DataFrame(list(zip(*([iter(predictfinal)] * 3))), columns=['a', 'b', 'c'])
Result:
a b c
0 1433.045997 1.640774 23
1 1433.638913 1.647472 22
2 1433.867409 1.675676 21
3 1433.848498 1.645758 20
4 1433.634046 1.638091 19
5 1437.029437 1.614767 18
6 1439.748510 1.501042 17
7 1440.950406 1.433891 16
8 1434.837061 1.525280 15
9 1434.971630 1.612595 14
10 1441.315352 3.263398 13
11 1435.693246 3.270344 12
12 1419.905783 1.910001 11
13 1426.073974 1.947684 10
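Why this works: [iter(predictfinal)] * 3 is a list holding the same iterator object three times, so zip pulls three consecutive elements into each tuple. A tiny demonstration:
it = iter(range(6))
print(list(zip(it, it, it)))  # [(0, 1, 2), (3, 4, 5)]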
You could do:
import numpy as np

pd.DataFrame(np.array(predictfinal).reshape(-1, 3), columns=['origin', 'mape', 'length'])
Output:
origin mape length
0 1433.045997 1.640774 23.0
1 1433.638913 1.647472 22.0
2 1433.867409 1.675676 21.0
3 1433.848498 1.645758 20.0
4 1433.634046 1.638091 19.0
5 1437.029437 1.614767 18.0
6 1439.748510 1.501042 17.0
7 1440.950406 1.433891 16.0
8 1434.837061 1.525280 15.0
9 1434.971630 1.612595 14.0
10 1441.315352 3.263398 13.0
11 1435.693246 3.270344 12.0
12 1419.905783 1.910001 11.0
13 1426.073974 1.947684 10.0
Or you can modify your loop to append one 3-item row per iteration:
predictfinal = []
for i in some_list:
    predictfinal.append([y_hat_orig[0], mape, length])
# output dataframe
pd.DataFrame(predictfinal, columns=['origin', 'mape', 'length'])
Here is a pandas solution:
s = pd.Series(predictfinal)
s.index = pd.MultiIndex.from_product([range(len(predictfinal) // 3), ['origin', 'map', 'len']])
s = s.unstack()
Out[268]:
len map origin
0 23.0 1.640774 1433.045997
1 22.0 1.647472 1433.638913
2 21.0 1.675676 1433.867409
3 20.0 1.645758 1433.848498
4 19.0 1.638091 1433.634046
5 18.0 1.614767 1437.029437
6 17.0 1.501042 1439.748510
7 16.0 1.433891 1440.950406
8 15.0 1.525280 1434.837061
9 14.0 1.612595 1434.971630
10 13.0 3.263398 1441.315352
11 12.0 3.270344 1435.693246
12 11.0 1.910001 1419.905783
13 10.0 1.947684 1426.073974
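One caveat: unstack() sorts the columns alphabetically (hence len, map, origin above); to restore the intended order, select the columns explicitly:
s = s[['origin', 'map', 'len']]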