First time organizing columns in text file - python

I am a first-time Python user. I have a text file of star data; I need to sort its columns and then take just the data from the V band. I have no idea how to start. Can someone please help, even if it's just to get me started?

If you can install pandas, then sorting on any column can be done like this:
#!/usr/bin/python
# read_stars.py
import sys

import pandas as pd

filename = sys.argv[1]  # or 'star_data.txt'
sep = '\t'              # or ',' or ' ', etc.
df = pd.read_csv(filename, sep=sep)
print(df.sort_values('Band'))
Change the commented lines to better suit your needs. From your comment it seems the separator may be tabs, so first try '\t' and adjust until parsing succeeds. sys.argv[1] uses the file passed as a command-line argument, like so:
$ python read_stars.py star_data.txt
JD Magnitude Uncertainty HQuncertainty Band Observer Code \
28 2.456420e+06 16.400 0.073 NaN V PSD
29 2.456421e+06 16.09 0.090 NaN V DKS
... (etc) ...
42 STD NaN NaN NaN
0 STD NaN NaN NaN
[58 rows x 23 columns]
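Since the goal is to keep only the V-band measurements, you can also filter the frame after loading it. A minimal sketch, assuming the band column really is named Band as in the output above:

# keep only rows whose Band column is 'V'
v_band = df[df['Band'] == 'V']
print(v_band)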
Hope this helps!

How to read space delimited data, two row types, no fixed width and plenty of missing values?

There's lots of good information out there on how to read space-delimited data with missing values if the data is fixed-width.
http://jonathansoma.com/lede/foundations-2017/pandas/opening-fixed-width-files/
Reading space delimited file in Python/Pandas with missing values
ASCII table with consecutive white-spaces as separators and missing data python pandas
I'm currently trying to read Japan's Meteorological Agency typhoon history data which is supposed to have this format, but doesn't actually:
# Header rows:
5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80
::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|
AAAAA BBBB CCC DDDD EEEE F G HHHHHHHHHHHHHHHHHHHH IIIIIIII
# Data rows:
5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80
::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|::::+::::|
AAAAAAAA BBB C DDD EEEE FFFF GGG HIIII JJJJ KLLLL MMMM P
It is very similar to NOAA's hurricane best track data, except that NOAA's is comma-delimited and gives missing values as -999 or NaN, which simplifies reading the data. Additionally, Japan's data doesn't actually follow the advertised format: for example, column FFFF in the data rows doesn't always have width 4; sometimes it has width 3.
I must say that I'm at a complete loss as to how to process this data into a dataframe. I've investigated the pd.read_fwf method, and it initially looked promising until I discovered the malformed columns and the two different row types.
My question:
How can I approach cleaning this data and getting it into a dataframe? I'd just find a different dataset, but honestly I can't find any comprehensive typhoon data anywhere else.
I went a little deep for you here, because I'm assuming you're doing this in the name of science, and if I can help someone trying to understand climate change then it's a good cause.
After looking the data over, I've noticed the issue relates to the data being stored in a de-normalized structure. There are two ways you can approach this off the top of my head. Rewriting the file into another file to load into pandas or dask is what I'll show, since that's probably the easiest way to think about it (but certainly not the most efficient, for those who will inevitably roast me in the comments).
Think of this as two separate tables with a one-to-many relationship: one table for typhoons, and another for the data belonging to a given typhoon.
A decent, though not especially efficient, way is to rewrite it into a better nested structure, like JSON, and then load the data from that. Note the two distinct row layouts.
Step 1: map the data out
There are really two tables in this one file. Each typhoon shows up as a header row that looks like this:
66666 9119 150 0045 9119 0 6 MIRREILE 19920701
while the records for that typhoon follow that row (think of each as a row of a separate table):
20080100 002 3 178 1107 994 035 00000 0000 30600 0200
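To make the target concrete, here is roughly what one entry of the nested structure will look like once parsed (the field names AA, BB, ... mirror the placeholder labels used below; the values are abridged from the sample rows above):

# one typhoon entry; its track records are nested under 'data'
typhoon_entry = {
    "AA": "66666",       # header indicator
    "BB": "9119",        # international ID
    "HH": "MIRREILE",    # storm name
    "data": [            # one dict per track record
        {"A": "20080100", "B": "002", "C": "3"},
    ],
}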
Load the file in as raw lines. Using the .readlines() method, we read each individual line in as an item in a list.
# load the file as raw input
with open('./test.txt') as f:
    lines = f.readlines()
Now that we have that read in, we need some logic to separate some lines from others. It appears that every time there is a typhoon header record, the line starts with '66666', so let's key off that. Given that we look at each individual line in a horribly inefficient loop, we can write some if/else logic to have a look:
if row[:5] == '66666':
    # do stuff
else:
    # do other stuff
That's going to be a pretty solid way to separate the logic for now, and it will be useful to guide splitting things up. Now we need to write a loop that checks this for each row:
# initialize list of dicts
collection = []

def write_typhoon(row: str, collection: list) -> list:
    if row[:5] == '66666':
        pass  # do stuff
    else:
        pass  # do other stuff

# read through the lines list from .readlines(), looping sequentially
for line in lines:
    write_typhoon(line, collection)
Lastly, we need some logic to actually extract the data within the if/else branches inside the write_typhoon() function. I didn't care to do a whole lot of thinking here and opted for the simplest thing I could make: defining the fixed-width metadata myself, because "yolo":
def write_typhoon(row: str, collection: list) -> list:
    if row[:5] == '66666':
        typhoon = {
            "AA": row[:5],
            "BB": row[6:11],
            "CC": row[12:15],
            "DD": row[16:20],
            "EE": row[21:25],
            "FF": row[26:27],
            "GG": row[28:29],
            "HH": row[30:50],
            "II": row[51:],
            "data": []
        }
        # clean that whitespace
        for key, value in typhoon.items():
            if key != 'data':
                typhoon[key] = value.strip()
        collection.append(typhoon)
    else:
        sub_data = {
            "A": row[:9],
            "B": row[9:12],
            "C": row[13:14],
            "D": row[15:18],
            "E": row[19:23],
            "F": row[24:32],
            "G": row[33:40],
            "H": row[41:42],
            "I": row[42:46],
            "J": row[47:51],
            "K": row[52:53],
            "L": row[54:57],
            "M": row[58:70],
            "P": row[71:]
        }
        # clean that whitespace
        for key, value in sub_data.items():
            sub_data[key] = value.strip()
        collection[-1]['data'].append(sub_data)
    return collection
Okay, that took me longer than I'm willing to admit, I won't lie. Gave me PTSD flashbacks from writing COBOL programs...
Anyway, now we have a nice, nested data structure in native python types. The fun can begin!
Step 2: Load this into a usable format
To analyze it, I'm assuming you'll want it in pandas (or maybe Dask if it's too big). Here is what I was able to come up with on that front:
import pandas as pd

df = pd.json_normalize(
    collection,
    record_path='data',
    meta=["AA", "BB", "CC", "DD", "EE", "FF", "GG", "HH", "II"]
)
A great reference for that can be found in the answers to this question (particularly the second one, not the accepted one).
Put it all together now:
import pandas as pd

# load the file as raw input
with open('./test.txt') as f:
    lines = f.readlines()

# initialize list of dicts
collection = []

def write_typhoon(row: str, collection: list) -> list:
    if row[:5] == '66666':
        typhoon = {
            "AA": row[:5],
            "BB": row[6:11],
            "CC": row[12:15],
            "DD": row[16:20],
            "EE": row[21:25],
            "FF": row[26:27],
            "GG": row[28:29],
            "HH": row[30:50],
            "II": row[51:],
            "data": []
        }
        # clean that whitespace
        for key, value in typhoon.items():
            if key != 'data':
                typhoon[key] = value.strip()
        collection.append(typhoon)
    else:
        sub_data = {
            "A": row[:9],
            "B": row[9:12],
            "C": row[13:14],
            "D": row[15:18],
            "E": row[19:23],
            "F": row[24:32],
            "G": row[33:40],
            "H": row[41:42],
            "I": row[42:46],
            "J": row[47:51],
            "K": row[52:53],
            "L": row[54:57],
            "M": row[58:70],
            "P": row[71:]
        }
        # clean that whitespace
        for key, value in sub_data.items():
            sub_data[key] = value.strip()
        collection[-1]['data'].append(sub_data)
    return collection

# read through file sequentially
for line in lines:
    write_typhoon(line, collection)

# load to pandas df using json_normalize
df = pd.json_normalize(
    collection,
    record_path='data',
    meta=["AA", "BB", "CC", "DD", "EE", "FF", "GG", "HH", "II"]
)
print(df.head(20))  # let's see what we've got!
There's someone who might have had the same problem and created a library for it, you can check it out here:
https://github.com/miniufo/besttracks
It also includes a quickstart notebook with loading the same dataset.
Here is how I ended up doing it. The key was realizing there are two types of rows in the data, but within each type the columns are fixed width:
header_fmt = "AAAAA BBBB CCC DDDD EEEE F G HHHHHHHHHHHHHHHHHHHH IIIIIIII"
track_fmt = "AAAAAAAA BBB C DDD EEEE FFFF GGG HIIII JJJJ KLLLL MMMM P"
So, here's how it went. I wrote these two functions to help me reformat the text file into CSV format:
def get_idxs(string, char):
    # indices of `char` occurrences that directly follow a letter
    idxs = []
    for i in range(len(string)):
        if string[i - 1].isalpha() and string[i] == char:
            idxs.append(i)
    return idxs

def replace(string, idx, replacement):
    # replace the character(s) at index (or iterable of indices) `idx`
    string = list(string)
    try:
        for i in idx:
            string[i] = replacement
    except TypeError:  # idx was a single index, not an iterable
        string[idx] = replacement
    return ''.join(string)
# test it out
header_fmt = "AAAAA BBBB CCC DDDD EEEE F G HHHHHHHHHHHHHHHHHHHH IIIIIIII"
track_fmt = "AAAAAAAA BBB C DDD EEEE FFFF GGG HIIII JJJJ KLLLL MMMM P"
header_idxs = get_idxs(header_fmt, ' ')
track_idxs = get_idxs(track_fmt, ' ')
print(replace(header_fmt, header_idxs, ','))
print(replace(track_fmt, track_idxs, ','))
Testing the function on the format strings, we see commas were put in the appropriate places:
AAAAA,BBBB, CCC,DDDD,EEEE,F,G,HHHHHHHHHHHHHHHHHHHH, IIIIIIII
AAAAAAAA,BBB,C,DDD,EEEE,FFFF, GGG, HIIII,JJJJ,KLLLL,MMMM, P
So next, apply those functions to the .txt file and create a .csv file with the output:
from contextlib import ExitStack
from tqdm.notebook import tqdm

with ExitStack() as stack:
    read_file = stack.enter_context(open('data/bst_all.txt', 'r'))
    write_file = stack.enter_context(open('data/bst_all_clean.txt', 'a'))
    for line in tqdm(read_file.readlines()):
        # strip the existing newline so we don't write blank lines between rows
        if ' ' in line[:8]:  # line is header data
            write_file.write(replace(line.rstrip('\n'), header_idxs, ',') + '\n')
        else:  # line is track data
            write_file.write(replace(line.rstrip('\n'), track_idxs, ',') + '\n')
The next task is to add the header data to ALL rows, so that all rows have the same format:
header_cols = ['indicator', 'international_id', 'n_tracks', 'cyclone_id', 'international_id_dup',
               'final_flag', 'delta_t_fin', 'name', 'last_revision']
track_cols = ['date', 'indicator', 'grade', 'latitude', 'longitude', 'pressure', 'max_wind_speed',
              'dir_long50', 'long50', 'short50', 'dir_long30', 'long30', 'short30', 'jp_landfall']

data = pd.read_csv('data/bst_all_clean.txt', names=track_cols, skipinitialspace=True)
data.date = data.date.astype('string')

# Get headers. Header rows have variable 'indicator' which is 5 characters long.
headers = data[data.date.apply(len) <= 5]
data[['storm_id', 'records', 'name']] = headers.iloc[:, [1, 2, 7]]

# Rearrange columns; bring identifiers to the first three columns.
cols = list(data.columns[-3:]) + list(data.columns[:-3])
data = data[cols]

# forward-fill NaNs for header data
data[['storm_id', 'records', 'name']] = data[['storm_id', 'records', 'name']].ffill()

# delete the now-extraneous header rows
data = data.drop(headers.index)
And that yields some nicely formatted data, like this:
storm_id records name date indicator grade latitude longitude
15 5102.0 37.0 GEORGIA 51031900 2 2 67.0 1614
16 5102.0 37.0 GEORGIA 51031906 2 2 70.0 1625
17 5102.0 37.0 GEORGIA 51031912 2 2 73.0 1635
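As a possible follow-up (not part of the original answer): because the header rows mixed text like storm names into the measurement columns, some columns may still have object dtype after the merge. A hedged sketch converting them once the header rows are dropped:

# coerce measurement columns to numeric; anything unparseable becomes NaN
num_cols = ['latitude', 'longitude', 'pressure', 'max_wind_speed']
data[num_cols] = data[num_cols].apply(pd.to_numeric, errors='coerce')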

How to compile multiple excel files in numeric order (file1.xls, file2.xls, etc) into one python file?

I am trying to compile several .xls files together. I found some code that works, but it reads the files in out of order. The files are named therm_sensor1.xls, therm_sensor2.xls, etc. I need the output to be in numeric order, but my current code seems to have them scrambled. I am very new to computer coding, so an explanation would be helpful :)
Also my current output has all the data except for the top 6 lines. I have no idea why it is doing this.
import pandas as pd
import glob

glob.glob('therm_sensor*.xls')
all_data = pd.DataFrame()
for f in glob.glob('therm_sensor*.xls'):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)
print(all_data.to_string())
Output:
6 1.739592e-05 0.30 NaN
7 2.024840e-05 0.35 NaN
8 2.309999e-05 0.40 NaN
...
502 2.949562e-10 0.95 NaN
503 3.113220e-10 1.00 NaN
I had a similar issue; eventually I figured out a way, so I'll give you the solution that worked for me. One key thing I did was name the columns before building the dataframe. See if this helps:
fileList = glob.glob("*.csv")
dfList = []
colnames = list(range(35))  # column labels 0..34; swap in real names if you have them
for filename in fileList:
    print(filename)
    df = pd.read_csv(filename, header=None)
    dfList.append(df)
concatDf = pd.concat(dfList, axis=0)
concatDf.columns = colnames
# concatDf.to_csv(outfile, index=None)  # you don't need this
The problem here is (probably) due to the difference in the way humans and computers tend to sort things. Take a list like this:
files = ['file10.xls', 'file2.xls', 'file1.xls']
The computer sorts this list in a way that looks unintuitive to humans (because it goes 1, 10, 2):
>>> sorted(files)
['file1.xls', 'file10.xls', 'file2.xls']
But if you change the sort criteria you can get a more intuitive result. Here, that means isolating the part of the filename that contains the number and turning it into an integer so the computer can sort it correctly:
>>> sorted(files, key=lambda s: int(s[4:-4]))
['file1.xls', 'file2.xls', 'file10.xls']
In your use case, this should do the trick:
sorted(glob.glob('therm_sensor*.xls'), key=lambda s: int(s[12:-4]))
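Putting that together with your original loop, a minimal sketch (assuming the filenames really do follow the therm_sensor<N>.xls pattern) that reads the files in numeric order and concatenates them; pd.concat is used because DataFrame.append has been removed from recent pandas:

import glob

import pandas as pd

# 'therm_sensor' is 12 characters, '.xls' is 4: slice out the number and sort on it
files = sorted(glob.glob('therm_sensor*.xls'), key=lambda s: int(s[12:-4]))

# read each file and stack them in that order
all_data = pd.concat((pd.read_excel(f) for f in files), ignore_index=True)
print(all_data.to_string())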

pandas read_csv ignore separator in last column

I have a file with the following structure (first row is the header, filename is test.dat):
ID_OBS LAT LON ALT TP TO LT_min LT_max STATIONNAME
ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0 Alert, Nunavut, Canada
How do I instruct pandas to read the entire station name (in this example, Alert, Nunavut, Canada) as a single element? I use delim_whitespace=True in my code, but that does not work, since the station name contains whitespace characters.
Running:
import pandas as pd
test = pd.read_csv('./test.dat', delim_whitespace=True, header=1)
print(test.to_string())
Produces:
ID_OBS LAT LON ALT TP TO LT_min LT_max STATIONNAME
ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0 Alert, Nunavut, Canada
Quickly reading through the tutorials did not help. What am I missing here?
I often approach these by writing my own little parser. In general there are ways to bend pandas to your will, but I find this way is often easier:
Code:
import re

import pandas as pd

def parse_my_file(filename):
    with open(filename) as f:
        for line in f:
            # split on whitespace at most 8 times, so the 9th field
            # (the station name) keeps its internal spaces
            yield re.split(r'\s+', line.strip(), maxsplit=8)

# build the generator
my_parser = parse_my_file('test.dat')

# first element returned is the columns
columns = next(my_parser)

# build the data frame
df = pd.DataFrame(my_parser, columns=columns)
print(df)
Results:
ID_OBS LAT LON ALT TP TO LT_min LT_max \
0 ALT_NOA_000 82.45 -62.52 210.0 FM 0 0.0 24.0
STATIONNAME
0 Alert, Nunavut, Canada
Your pasted sample file is a bit ambiguous: it's not possible to tell by eye if something that looks like a few spaces is a tab or not, for example.
In general, though, note that plain old Python is more expressive than pandas or CSV modules (pandas's strength is elsewhere). E.g., there are even Python modules for recursive-descent parsers, which pandas obviously lacks. You can use regular Python to manipulate the file into an easier form for pandas to parse. For example:
>>> import re
>>> ['#'.join(re.split(r'[ \t]+', l.strip(), maxsplit=8)) for l in open('stuff.tsv') if l.strip()]
['ID_OBS#LAT#LON#ALT#TP#TO#LT_min#LT_max#STATIONNAME',
 'ALT_NOA_000#82.45#-62.52#210.0#FM#0#0.0#24.0#Alert, Nunavut, Canada']
This changes the delimiter to '#'; if you write the result back to a file, you can then parse it using delimiter='#'.
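To round that out, a minimal sketch of the full round trip (stuff.tsv and stuff.hash are placeholder filenames):

import re

import pandas as pd

# rewrite the whitespace-delimited file with '#' as the separator
with open('stuff.tsv') as src, open('stuff.hash', 'w') as dst:
    for line in src:
        if line.strip():
            dst.write('#'.join(re.split(r'[ \t]+', line.strip(), maxsplit=8)) + '\n')

# the station name's internal spaces no longer confuse the parser
df = pd.read_csv('stuff.hash', sep='#')
print(df['STATIONNAME'])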

Importing txt as dataframe in python

I have a txt file with the following format:
[(u'this guy',u'hey there',u'dfd fasd awe wedsad,daeraes',1),
(u'that guy',u'cya',u'dfd fasd es',1),
(u'another guy',u'hi',u'dfawe wedsad,daeraes',-1)]
and I would like to import it into Python as a dataframe with 4 columns. I have tried:
trial = []
for line in open('filename.txt', 'r'):
    trial.append(line.rstrip())
which gives each line as a string. Using:
import pandas as pd
pd.read_csv('filename.txt', sep=",", header = None)
read_csv from pandas, splitting on commas, also splits on the commas inside the text fields:
0 1 2 3 4 5
0 [(u'this guy' u'hey there' u'dfd fasd awe wedsad daeraes' 1) NaN
1 (u'that guy' u'cya' u'dfd fasd es' 1) NaN NaN
2 (u'another guy' u'hi' u'dfawe wedsad daeraes' -1)] NaN
Any idea how to get around that?
Assuming you have the data in data.txt.
import pandas as pd

py_array = eval(open("data.txt").read())
dataframe = pd.DataFrame(py_array)
Python needs to parse the file first. It doesn't make sense to use read_csv, since the file isn't close enough to CSV format.
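A safer variant of the same idea: ast.literal_eval parses Python literals (like this list of tuples) without executing arbitrary code, so it is generally preferable to eval for files you don't fully trust. The column names below are invented for illustration:

import ast

import pandas as pd

# literal_eval only accepts literal syntax, so a malicious file
# cannot run code the way it could through eval
py_array = ast.literal_eval(open("data.txt").read())
df = pd.DataFrame(py_array, columns=['name', 'greeting', 'text', 'label'])
print(df)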
I'm assuming you mean python, not matlab.
The data is already a matrix.
aa = [(u'this guy', u'hey there', u'dfd fasd awe wedsad,daeraes', 1),
      (u'that guy', u'cya', u'dfd fasd es', 1),
      (u'another guy', u'hi', u'dfawe wedsad,daeraes', -1)]

for i in range(3):
    for j in range(4):
        print(aa[i][j])
output:
this guy
hey there
dfd fasd awe wedsad,daeraes
1
that guy
cya
dfd fasd es
1
another guy
hi
dfawe wedsad,daeraes
-1
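And since the question asked for a dataframe, the same list of tuples can be handed straight to pd.DataFrame; a minimal sketch:

import pandas as pd

df = pd.DataFrame(aa)  # 3 rows x 4 columns, integer column labels 0-3
print(df[3])           # the numeric label column: 1, 1, -1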

reading file with missing values in python pandas

I'm trying to read a .txt file with missing values using pandas.read_csv. My data is of the format:
10/08/2012,12:10:10,name1,0.81,4.02,50;18.5701400N,4;07.7693770E,7.92,10.50,0.0106,4.30,0.0301
10/08/2012,12:10:11,name2,,,,,10.87,1.40,0.0099,9.70,0.0686
with thousands of samples sharing the same point name, GPS position, and other readings.
I use this code:
myData = read_csv('~/data.txt', sep=',', na_values='')
The code is wrong, as na_values does not give NaN or another indicator. The columns should all have the same size, but I end up with different lengths.
I don't know what exactly should be passed as na_values (I did try all sorts of things).
Thanks
The parameter na_values must be "list like" (see this answer).
A string is "list like" so:
na_values='abc' # would transform the letters 'a', 'b' and 'c' each into `nan`
# is equivalent to
na_values=['a','b','c']
Similarly:
na_values=''
# is equivalent to
na_values=[] # and this is not what you want!
This means that you need to use na_values=[''].
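A quick check of the fix (recent pandas versions also treat empty fields as NaN by default, as the next answer points out):

from io import StringIO

import pandas as pd

row = "10/08/2012,12:10:11,name2,,,10.87\n"

df = pd.read_csv(StringIO(row), header=None, na_values=[''])
print(df.isna().sum().sum())  # 2 empty fields -> 2 NaN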
What version of pandas are you on? Interpreting empty strings as NaN is the default behavior for pandas, and it seems to parse the empty strings fine in your data snippet both in v0.7.3 and current master, without using the na_values parameter at all.
In [10]: data = """\
10/08/2012,12:10:10,name1,0.81,4.02,50;18.5701400N,4;07.7693770E,7.92,10.50,0.0106,4.30,0.0301
10/08/2012,12:10:11,name2,,,,,10.87,1.40,0.0099,9.70,0.0686
"""
In [11]: read_csv(StringIO(data), header=None).T
Out[11]:
0 1
X.1 10/08/2012 10/08/2012
X.2 12:10:10 12:10:11
X.3 name1 name2
X.4 0.81 NaN
X.5 4.02 NaN
X.6 50;18.5701400N NaN
X.7 4;07.7693770E NaN
X.8 7.92 10.87
X.9 10.5 1.4
X.10 0.0106 0.0099
X.11 4.3 9.7
X.12 0.0301 0.0686
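For reference, a self-contained version of that session that runs on current pandas (the X.1-style labels in the output above come from a very old pandas; recent versions label the columns 0 through 11):

from io import StringIO

import pandas as pd

data = """\
10/08/2012,12:10:10,name1,0.81,4.02,50;18.5701400N,4;07.7693770E,7.92,10.50,0.0106,4.30,0.0301
10/08/2012,12:10:11,name2,,,,,10.87,1.40,0.0099,9.70,0.0686
"""

# empty fields are parsed as NaN by default; .T transposes for easy reading
print(pd.read_csv(StringIO(data), header=None).T)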
