Remove linebreak at specific position in textfile - python
I have a large textfile, which has linebreaks at column 80 due to console width. Many of the lines in the textfile are not 80 characters long, and are not affected by the linebreak. In pseudocode, this is what I want:
Iterate through lines in file
If line matches this regex pattern: ^(.{80})\n(.+)
Replace this line with a new string consisting of match.group(1) and match.group(2). Just remove the linebreak from this line.
If line doesn't match the regex, skip!
Maybe I don't need regex to do this?
f = open("file")
for line in f:
    if len(line) == 81:
        n = f.next()
        line = line.rstrip() + n
    print line.rstrip()
f.close()
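One caveat: if the 81-character line happens to be the last line of the file, f.next() raises StopIteration, and a logical line folded across more than two physical lines is only partially merged. A slightly more defensive sketch (my variant, in Python 3 style, using the built-in next with a default):

with open("file") as f:
    for line in f:
        merged = line
        # A physical line that filled the console is 80 chars plus '\n'.
        while len(line) == 81:
            line = next(f, "")  # '' guards against hitting end-of-file
            merged = merged.rstrip("\n") + line
        print(merged.rstrip("\n"))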
Here's some code which should do the trick:
def remove_linebreaks(textfile, position=81):
    """
    textfile : a file opened in 'r' mode
    position : the length, newline included, of lines that were folded
    return a string with those fold newlines removed
    """
    fixed_lines = []
    for line in textfile:
        if len(line) == position:
            line = line[:-1]  # drop the trailing '\n'
        fixed_lines.append(line)
    return ''.join(fixed_lines)
Note that compared to your pseudo code, this will merge any number of consecutive folded lines.
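A minimal usage sketch (the file name is a placeholder of mine):

with open('folded.txt') as f:   # 'folded.txt' stands in for your file
    text = remove_linebreaks(f)
print(text)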
Consider this.
def merge_lines(line_iter):
    buffer = ''
    for line in line_iter:
        if len(line) <= 80:
            yield buffer + line
            buffer = ''
        else:
            buffer += line[:-1]  # remove '\n'
    if buffer:  # don't lose a trailing fold at end of file
        yield buffer
with open('myFile', 'r') as source:
    with open('copy of myFile', 'w') as destination:
        for line in merge_lines(source):
            destination.write(line)
I find that an explicit generator function makes it much easier to test and debug the essential logic of the script without having to create mock filesystems or do lots of fancy setup and teardown for testing.
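For instance, because merge_lines only needs an iterable of strings, you can exercise it with a plain list and no file at all (a testing sketch of mine, not from the original answer):

# Two physical lines folded from one logical line, then a line that stands alone.
sample = ['x' * 80 + '\n', 'tail\n', 'short\n']
merged = list(merge_lines(sample))
assert merged == ['x' * 80 + 'tail\n', 'short\n']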
Here is an example of how to use regular expressions to achieve this. Regular expressions aren't the best solution everywhere, and in this case I think not using them is more efficient. Anyway, here is the solution (note that the pattern needs the re.MULTILINE flag so that ^ matches at the start of every line, not just the start of the whole text):
text = re.sub(r'(?<=^.{80})\n', '', text, flags=re.M)
You can also use your own regular expression when you call re.sub with a callable:
text = re.sub(r'^(.{80})\n(.+)', lambda m: m.group(1) + m.group(2), text, flags=re.M)
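Putting it together on a whole file (a sketch; 'folded.txt' and 'fixed.txt' are placeholder names of mine):

import re

with open('folded.txt') as src:   # placeholder input name
    text = src.read()

# Drop the newline after any physical line that is exactly 80 characters wide.
text = re.sub(r'(?<=^.{80})\n', '', text, flags=re.MULTILINE)

with open('fixed.txt', 'w') as dst:   # placeholder output name
    dst.write(text)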
Related
Efficient way to check for expected semicolon positions in a length-delimited text file. Combining many "or" statements
I am checking the position of semicolons in text files. I have length-delimited text files with thousands of rows which look like this:

AB;2;43234;343;
CD;4;41234;443;
FE53234;543;
FE;5;53;34;543;

I am using the following code to check the correct position of the semicolons. If a semicolon is missing where I would expect it, a statement is printed:

import glob

path = r'C:\path\*.txt'
for fname in glob.glob(path):
    print("Checking file", fname)
    with open(fname) as f:
        content = f.readlines()
    for count, line in enumerate(content):
        if (line[2:3] != ";" or line[4:5] != ";" or line[10:11] != ";"
                # really a lot of continuing entries like these
                or line[14:15] != ";"):
            print("\nSemikolon expected, but not found!\nrow:", count + 1,
                  "\n", fname, "\n", line)

The code works: no error is thrown, and it detects the faulty row. My problem is that I have a lot of semicolons to check, so I end up with really a lot of continuing entries like

or line[xx:xx] != ";"

I think this is inefficient on two counts:

- It is visually not nice to have this many code lines. It could be shortened.
- It is logically inefficient to have this many separate or checks. Reducing them could probably decrease the runtime.

I am searching for an efficient solution which improves the readability and, most importantly, reduces the runtime (as I think the way it is written now is inefficient, with all the or statements). I only want to check whether there are semicolons where I expect them; I do not care about any additional semicolons in the data fields.
Just going off of what you've written:

filename = ...
with open(filename) as file:
    lines = file.readlines()

delimiter_indices = (2, 4, 10, 14)  # The indices in any given line where you expect to see semicolons.

for line_num, line in enumerate(lines):
    if any(line[index] != ";" for index in delimiter_indices):
        print(f"{filename}: Semicolon expected on line #{line_num}")

If a line doesn't have at least 15 characters, this will raise an exception. Also, lines like ;;;;;;;;;;;;;;; are technically valid.

EDIT: Assuming you have an input file that looks like:

AB;2;43234;343;
CD;4;41234;443;
FE;5;53234;543;
FE;5;53;34;543;

(Note the blank line at the end.) My provided solution works fine; I do not see any exceptions or "Semicolon expected on line #..." outputs. However: if your input file ends with two blank lines, this will raise an exception. If your input file contains a blank line somewhere in the middle, this will also raise an exception. If you have lines in your file that are less than 15 characters long (not counting the last line), this will raise an exception.

You could simply say that every line must meet two criteria to be considered valid:

- The current line must be at least 15 characters long (or max(delimiter_indices) + 1 characters long).
- All characters at delimiter indices in the current line must be semicolons.

Code:

for line_num, line in enumerate(lines):
    is_long_enough = len(line) >= (max(delimiter_indices) + 1)
    has_correct_semicolons = all(line[index] == ';' for index in delimiter_indices)
    if not (is_long_enough and has_correct_semicolons):
        print(f"{filename}: Semicolon expected on line #{line_num}")

EDIT: My bad, I ruined the short-circuit evaluation for the sake of readability. The following should work:

is_valid_line = (len(line) >= (max(delimiter_indices) + 1)) and \
                all(line[index] == ';' for index in delimiter_indices)
if not is_valid_line:
    print(f"{filename}: Semicolon expected on line #{line_num}")

If the length of the line is not correct, the second half of the expression will not be evaluated due to short-circuit evaluation, which should prevent the IndexError.

EDIT: Since you have so many files, with so many lines and so many semicolons per line, you can do the max(delimiter_indices) calculation once before the loop instead of recalculating it for each line. It may not make a big difference, but you could also iterate over the file object directly (which yields the next line on each iteration) rather than loading the entire file into memory with lines = file.readlines(). This isn't really required, and it's not as cute as using all or any, but I turned the has_correct_semicolons expression into an actual loop over the delimiter indices; that way the error message can be more explicit, pointing to the offending index of the offending line. There is also a separate error message for when a line is too short.
import glob
import os

delimiter_indices = (2, 4, 10, 14)
max_delimiter_index = max(delimiter_indices)
min_line_length = max_delimiter_index + 1

for path in glob.glob(r"C:\path\*.txt"):
    filename = os.path.basename(path)  # glob yields plain strings, so use os.path
    print(filename.center(32, "-"))
    with open(path) as file:
        for line_num, line in enumerate(file):
            is_long_enough = len(line) >= min_line_length
            if not is_long_enough:
                print(f"{filename}: Line #{line_num} is too short")
                continue
            has_correct_semicolons = True
            for index in delimiter_indices:
                if line[index] != ";":
                    has_correct_semicolons = False
                    break
            if not has_correct_semicolons:
                print(f"{filename}: Semicolon expected on line #{line_num}, character #{index}")

print("All files done")
If you just want to validate the structure of the lines, you can use a regex that is easy to maintain if your requirements change:

import re

with open(fname) as f:
    for row, line in enumerate(f, 1):
        if not re.match(r"[A-Z]{2};\d;\d{5};\d{3};", line):
            print("\nSemicolon expected, but not found!\nrow:", row,
                  "\n", fname, "\n", line)

If you don't actually care about the content and only want to check the positions of the ;, you can simplify the regex to:

r".{2};.;.{5};.{3};"
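Since the question stresses runtime across thousands of rows, compiling the pattern once outside the loop is a natural micro-optimisation; re caches compiled patterns internally, so the saving is modest, but it reads nicely (a sketch using the dot pattern above):

import re

semicolon_check = re.compile(r".{2};.;.{5};.{3};")  # positions only, content ignored

with open(fname) as f:   # fname as in the snippet above
    for row, line in enumerate(f, 1):
        if not semicolon_check.match(line):
            print("Semicolon expected, but not found! row:", row, fname, line)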
Regular Expression to find valid words in file
I need to write a function get_specified_words(filename) to get a list of lowercase words from a text file. All of the following conditions must be applied:

- Include all lower-case character sequences, including those that contain a - or ' character and those that end with a ' character.
- Exclude words that end with a -.
- The function must only process lines between the start and end marker lines.
- Use this regular expression to extract the words from each relevant line of a file: valid_line_words = re.findall("[a-z]+[-'][a-z]+|[a-z]+[']?|[a-z]+", line)
- Ensure that the line string is lower case before using the regular expression.
- Use the optional encoding parameter when opening files for reading. That is, your open call should look like open(filename, encoding='utf-8'). This will be especially helpful if your operating system doesn't set Python's default encoding to UTF-8.

The sample text file testing.txt contains this:

That are after the start and should be dumped. So should that and that and yes, that
*** START OF SYNTHETIC TEST CASE ***
Toby's code was rather "interesting", it had the following issues: short, meaningless identifiers such as n1 and n; deep, complicated nesting; a doc-string drought; very long, rambling and unfocused functions; not enough spacing between functions; inconsistent spacing before and after operators, just like this here. Boy was he going to get a low style mark.... Let's hope he asks his friend Bob to help him bring his code up to an acceptable level.
*** END OF SYNTHETIC TEST CASE ***
This is after the end and should be ignored too. Have a nice day.

Here's my code:

import re

def stripped_lines(lines):
    for line in lines:
        stripped_line = line.rstrip('\n')
        yield stripped_line

def lines_from_file(fname):
    with open(fname, 'rt') as flines:
        for line in stripped_lines(flines):
            yield line

def is_marker_line(line, start='***', end='***'):
    min_len = len(start) + len(end)
    if len(line) < min_len:
        return False
    return line.startswith(start) and line.endswith(end)

def advance_past_next_marker(lines):
    for line in lines:
        if is_marker_line(line):
            break

def lines_before_next_marker(lines):
    valid_lines = []
    for line in lines:
        if is_marker_line(line):
            break
        valid_lines.append(re.findall("[a-z]+[-'][a-z]+|[a-z]+[']?|[a-z]+", line))
    for content_line in valid_lines:
        yield content_line

def lines_between_markers(lines):
    it = iter(lines)
    advance_past_next_marker(it)
    for line in lines_before_next_marker(it):
        yield line

def words(lines):
    text = '\n'.join(lines).lower().split()
    return text

def get_valid_words(fname):
    return words(lines_between_markers(lines_from_file(fname)))

# This must be executed
filename = "valid.txt"
all_words = get_valid_words(filename)
print(filename, "loaded ok.")
print("{} valid words found.".format(len(all_words)))
print("word list:")
print("\n".join(all_words))

Here's my output:

File "C:/Users/jj.py", line 45, in <module>
    text = '\n'.join(lines).lower().split()
builtins.TypeError: sequence item 0: expected str instance, list found

Here's the expected output (the real output prints one word per line):

valid.txt loaded ok.
73 valid words found.
word list:
toby's code was rather interesting it had the following issues short meaningless identifiers such as n and n deep complicated nesting a doc-string drought very long rambling and unfocused functions not enough spacing between functions inconsistent spacing before and after operators just like this here boy was he going to get a low style mark let's hope he asks his friend bob to help him bring his code up to an acceptable level

I need help with getting my code to work. Any help is appreciated.
lines_between_markers(lines_from_file(fname)) gives you a list of lists of valid words, so you just need to flatten it:

def words(lines):
    words_list = [w for line in lines for w in line]
    return words_list

That does the trick. But I think you should review the design of your program: lines_between_markers should only yield lines between markers, yet it currently does more. The regexp should be used on the result of that function, not inside it. Also, two of the requirements are not met yet:

- Ensure that the line string is lower case before using the regular expression.
- Use the optional encoding parameter when opening files for reading. That is, your open call should look like open(filename, encoding='utf-8').
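A sketch of that suggested split of responsibilities, reusing is_marker_line and advance_past_next_marker from the question (my restructuring, one possible design among several):

import re

def lines_between_markers(lines):
    # Yield only the raw lines between the first two marker lines.
    it = (line.rstrip('\n') for line in lines)
    advance_past_next_marker(it)
    for line in it:
        if is_marker_line(line):
            break
        yield line

def words(lines):
    # Lower-case first, then apply the required regex to each line.
    result = []
    for line in lines:
        result.extend(re.findall("[a-z]+[-'][a-z]+|[a-z]+[']?|[a-z]+", line.lower()))
    return result

def get_valid_words(fname):
    with open(fname, encoding='utf-8') as f:
        return words(lines_between_markers(f))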
Having problems with strings and arrays
I want to read a text file and copy the text that is in between '~~~~~~~~~~~~~' into an array. However, I'm new to Python and this is as far as I got:

with open("textfile.txt", "r", encoding='utf8') as f:
    searchlines = f.readlines()

a = [0]
b = 0
for i, line in enumerate(searchlines):
    if '~~~~~~~~~~~~~' in line:
        b = b + 1
    if '~~~~~~~~~~~~~' not in line:
        if 's1mb4d' in line:
            break
        a.insert(b, line)

This is what I envisioned: first I read all the lines of the text file; then I declare 'a' as an array to which text should be added, and 'b' because I need it as an index. The number of lines in between the '~~~~~~~~~~~~~' is not even, which is why I use 'b': so I can put lines of text into one array index until a new '~~~~~~~~~~~~~' is found. I check for '~~~~~~~~~~~~~'; if it is found, I increase 'b' so I can start adding lines of text into a new array index. The text file ends with 's1mb4d', so once it's found, the program ends. And if '~~~~~~~~~~~~~' is not found in the line, I add text to the array. But things didn't go well: only 1 line of the entire text between those '~~~~~~~~~~~~~' is being copied to each array index. Here is an example of the text file:

~~~~~~~~~~~~~
Text123asdasd
asdasdjfjfjf
~~~~~~~~~~~~~
123abc
321bca
gjjgfkk
~~~~~~~~~~~~~
You could use a regular expression; give this a try:

import re

input_text = ['Text123asdasd asdasdjfjfjf', '~~~~~~~~~~~~~', '123abc 321bca gjjgfkk', '~~~~~~~~~~~~~']
a = []
for line in input_text:
    my_text = re.findall(r'[^\~]+', line)
    if len(my_text) != 0:
        a.append(my_text)

What it does: it reads line by line and looks for all characters except '~'. If a line consists only of '~', it is ignored; every line with text is appended to your a list afterwards. And just because we can, a one-liner (excluding the import and source, of course):

import re

lines = ['Text123asdasd asdasdjfjfjf', '~~~~~~~~~~~~~', '123abc 321bca gjjgfkk', '~~~~~~~~~~~~~']
a = [re.findall(r'[^\~]+', line) for line in lines if len(re.findall(r'[^\~]+', line)) != 0]
In Python, the solution to a large part of problems is often to find the right function in the standard library that does the job. Here you should try using split instead; it should be way easier. If I understand your goal correctly, you can do it like this:

joined_lines = ''.join(searchlines)
result = joined_lines.split('~~~~~~~~~~~~~')

The first line joins your list of lines into a single string, and the second one cuts that big string every time it encounters the '~~~~~~~~~~~~~' sequence.
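For completeness, a sketch of the full split-based approach on the question's file, trimming each chunk and handling the 's1mb4d' end marker (the trimming and the end-marker cut are my additions):

with open("textfile.txt", "r", encoding='utf8') as f:
    text = f.read()

text = text.split('s1mb4d')[0]   # ignore everything from the end marker on
# Cut at the 13-tilde separator, then drop whitespace-only chunks.
a = [chunk.strip() for chunk in text.split('~' * 13) if chunk.strip()]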
I tried to clean it up to the best of my knowledge; try this and let me know if it works. We can work together on this! :)

with open("textfile.txt", "r", encoding='utf8') as f:
    searchlines = f.readlines()

a = []
currentline = ''
for line in searchlines:
    if 's1mb4d' in line:
        break
    if '~~~~~~~~~~~~~' in line:
        if currentline:           # close off the block we were collecting
            a.append(currentline)
        currentline = ''
    else:
        currentline += line

Some notes:

- A separate check for 's1mb4d' breaks out of the loop.
- append will automatically add each collected block to the end of the array.
- currentline will continue to accumulate text until the next separator line is reached (then it is reset), which I think is what you want.
import re

s = ['']
with open('path\\to\\sample.txt') as f:
    for l in f:
        a = l.strip().split("\n")
        s += a

a = []
for line in s:
    my_text = re.findall(r'[^\~]+', line)
    if len(my_text) != 0:
        a.append(my_text)

print a

Output:

[['Text123asdasd asdasdjfjfjf'], ['123abc 321bca gjjgfkk']]
If you're willing to impose/accept the constraint that the separator should be exactly 13 ~ characters (actually '\n%s\n' % ('~' * 13) to be specific) ... then you could accomplish this for relatively normal sized files using just:

#!/usr/bin/python
## (Should be #!/usr/bin/env python; but StackOverflow's syntax highlighter?)
separator = '\n%s\n' % ('~' * 13)
with open('somefile.txt') as f:
    results = f.read().split(separator)
# Use your results, a list of the strings separated by these separators.

Note that '~' * 13 is a way, in Python, of constructing a string by repeating some smaller string thirteen times, and 'xx%sxx' % 'YY' is a way to "interpolate" one string into another. Of course you could just paste the thirteen ~ characters into your source code ... but I would consider constructing the string as shown to make it clear that the length is part of the string's specification --- that this is part of your file format requirements ... and that any other number of ~ characters won't be sufficient.

If you really want any line of any number of ~ characters to serve as a separator, then you'll want to use the split() function from the regular expressions module rather than the .split() method provided by the built-in string objects.

Note that this snippet of code will return all of the text between your separator lines, including any newlines they include. There are other snippets of code which can filter those out. For example, given our previous results:

# ... refine results by filtering out newlines (replacing them with spaces)
results = [' '.join(each.split('\n')) for each in results]

(You could also use the .replace() string method, but I prefer the join/split combination.) In this case we're using a list comprehension (a feature of Python) to iterate over each item in our results (which we're arbitrarily naming each), performing our transformation on it, and the resulting list is bound back to the name results. I highly recommend learning and getting comfortable with list comprehensions if you're going to learn Python; they're commonly used and can be a bit exotic compared to the syntax of many other programming and scripting languages.

This should work on MS Windows as well as Unix (and Unix-like) systems because of how Python handles "universal newlines." To use these examples under Python 3 you might have to work a little on the encodings and string types. (I didn't need to for my Python 3.6 installed under MacOS X using Homebrew ... but just be forewarned.)
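Regarding the note above about any number of ~ characters: a sketch of that regex-based variant (same hypothetical file name):

import re

with open('somefile.txt') as f:
    text = f.read()

# Any line consisting solely of one or more '~' characters separates records.
results = re.split(r'\n~+\n', text)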
complex regex matches in python
I have a txt file that contains the following data:

chrI
ATGCCTTGGGCAACGGT...(multiple lines)
chrII
AGGTTGGCCAAGGTT...(multiple lines)

I want to first find 'chrI' and then iterate through the multiple lines of ATGC until I find the xth char. Then I want to print the xth char until the yth char. I have been using regex, but once I have located the line containing chrI, I don't know how to continue iterating to find the xth char. Here is my code:

for i, line in enumerate(sacc_gff):
    for match in re.finditer(chromo_val, line):
        print(line)
        for match in re.finditer(r"[ATGC]{%d},{%d}\Z" % (int(amino_start), int(amino_end)), line):
            print(match.group())

What the variables mean:

chromo_val = chrI
amino_start = (some start point my program found)
amino_end = (some end point my program found)

Note: amino_start and amino_end need to be in variable form. Please let me know if I could clarify anything for you. Thank you.
It looks like you are working with FASTA data, so I will provide an answer with that in mind; but if it isn't, you can still use the sub-sequence selection part.

fasta_data = {}  # creates an empty dictionary
with open(fasta_file, 'r') as fh:
    for line in fh:
        if line[0] == '>':
            seq_id = line.rstrip()[1:]  # strip newline character and remove leading '>' character
            fasta_data[seq_id] = ''
        else:
            fasta_data[seq_id] += line.rstrip()

# return substring from chromosome 'chrI' with a first character at amino_start,
# up to but not including amino_end
sequence_string1 = fasta_data['chrI'][amino_start:amino_end]

# return substring from chromosome 'chrII' with a first character at amino_start,
# up to and including amino_end
sequence_string2 = fasta_data['chrII'][amino_start:amino_end+1]

fasta format:

>chr1
ATTTATATATAT
ATGGCGCGATCG
>chr2
AATCGCTGCTGC
Since you are working with FASTA files, which are formatted like this:

>Chr1
ATCGACTACAAATTT
>Chr2
ACCTGCCGTAAAAATTTCC

and are a bioinformatics major, I am guessing you will be manipulating sequences often, so I recommend installing the Perl package called FAST. Once it is installed, to get characters 2-14 of every sequence you would do this:

fascut 2..14 fasta_file.fa

There is a recent publication for FAST, and its GitHub repository contains a whole toolbox for manipulating molecular sequence data on the command line.
Splitting lines in python based on some character
Input:

!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/1
2/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:14,000.
0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W
55.576,+0013!,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013!,A,56
281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34
:18,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:19,000.0,0,37N22.

Output:

!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:14,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:18,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:19,000.0,0,37N22.

'!' is the starting character and +0013 should be the ending of each line (if present).

The problem I am getting is output like this:

!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/1
2/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:14,000.
0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W

Any help would be highly appreciated! My code:

file_open = open('sample.txt', 'r')
file_read = file_open.read()
file_open2 = open('output.txt', 'w+')
counter = 0
for i in file_read:
    if '!' in i:
        if counter == 1:
            file_open2.write('\n')
            counter = counter - 1
        counter = counter + 1
    file_open2.write(i)
You can try something like this:

with open("abc.txt") as f:
    data = f.read().replace("\r\n", "")  # replace the newlines with ""
    # the newline can be "\n" in your system instead of "\r\n"
    ans = filter(None, data.split("!"))  # split the data at '!', then filter out empty lines
    for x in ans:
        print "!" + x  # or write to some other file

Output:

!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:14,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:18,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:19,000.0,0,37N22.
Could you just use str.split?

lines = file_read.split('!')

Now lines is a list which holds the split data. This is almost the set of lines you want to write; the only difference is that they don't have trailing newlines and they don't have '!' at the start. We can put those in easily with string formatting, e.g. '!{0}\n'.format(line). Then we can put that whole thing in a generator expression which we pass to file.writelines to put the data in a new file:

file_open2.writelines('!{0}\n'.format(line) for line in lines)

You might need:

file_open2.writelines('!{0}\n'.format(line.replace('\n', '')) for line in lines)

if you find that you're getting more newlines than you wanted in the output. One other point: when opening files, it's nice to use a context manager, which makes sure that the file is closed properly:

with open('inputfile') as fin:
    lines = fin.read().split('!')
with open('outputfile', 'w') as fout:
    fout.writelines('!{0}\n'.format(line.replace('\n', '')) for line in lines)
Another option, using replace instead of split, since you know the starting and ending characters of each line:

In [14]: data = """!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/1
    ...: 2/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:14,000.
    ...: 0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W
    ...: 55.576,+0013!,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013!,A,56
    ...: 281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34
    ...: :18,000.0,0,37N22.714,121W55.576,+0013!,A,56281,12/12/19,19:34:19,000.0,0,37N22.""".replace('\n', '')

In [15]: print data.replace('+0013!', "+0013\n!")
!,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:14,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:18,000.0,0,37N22.714,121W55.576,+0013
!,A,56281,12/12/19,19:34:19,000.0,0,37N22.
Just for some variance, here is a regular expression answer:

import re

outputFile = open('output.txt', 'w+')
with open('sample.txt', 'r') as f:
    for line in re.findall("!.+?(?=!|$)", f.read(), re.DOTALL):
        outputFile.write(line.replace("\n", "") + '\n')
outputFile.close()

It opens the output file, gets the contents of the input file, and loops through all the matches of the regular expression !.+?(?=!|$) with the re.DOTALL flag. An explanation of the regular expression and what it matches can be found here: http://regex101.com/r/aK6aV4 . After we have a match, we strip the newlines out of it and write it to the file.
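As a quick illustration of what that pattern matches (a toy string of mine, not the full sample):

import re

s = "!a,+0013!b,+0013"
print(re.findall(r"!.+?(?=!|$)", s, re.DOTALL))
# prints: ['!a,+0013', '!b,+0013']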
Let's try to add a \n before every "!", then let Python splitlines :-) :

file_read.replace("\n", "").replace("!", "\n!").splitlines()

The first replace drops the wrap newlines already present in the file; the second inserts a newline before each "!".
I would actually implement this as a generator, so that you can work on the data stream rather than the entire content of the file. This will be quite memory friendly when working with huge files.

>>> def split_on_stream(it, sep="!"):
...     prev = ""
...     for line in it:
...         line = (prev + line.strip()).split(sep)
...         for parts in line[:-1]:
...             yield parts
...         prev = line[-1]
...     yield prev

>>> with open("test.txt") as fin:
...     for parts in split_on_stream(fin):
...         print parts

,A,56281,12/12/19,19:34:12,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:13,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:14,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:15,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:16,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:17,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:18,000.0,0,37N22.714,121W55.576,+0013
,A,56281,12/12/19,19:34:19,000.0,0,37N22.