Parsing a file in python

Caveat emptor: I can spell p-y-t-h-o-n and that's pretty much all there is to my knowledge. I tried to take some online classes, but after about 20 lectures and not learning much, I gave up a long time ago. So what I am going to ask is very simple, but I need help:
I have a file with the following structure:
object_name_here:
  object_owner:
    - me#my.email.com
    - user#another.email.com
  object_id: some_string_here
  identification: some_other_string_here
And this block repeats itself hundreds of times in the same file.
Other than object_name_here, which is unique and required, all other lines may or may not be present, and there can be anywhere from zero to 10+ different email addresses.
What I want to do is export this information into a flat file, like /etc/passwd, with a twist.
For instance, I want the block above to yield a line like this:
object_name_here:object_owner=me#my.email.com,user#another.email.com:object_id=some_string_here:identification=some_other_string_here
Again, the number of fields or the length of the content fields is not fixed by any means. I am sure this is a pretty easy task to accomplish with Python, but I don't know how. I don't even know where to start.
Final Edit: Okay, I am able to write a shell script (bash, ksh, etc.) to parse the information, but when I asked this question originally, I was under the impression that Python had a simpler way of handling uniform or semi-uniform data structures such as this one. My understanding was proven to be not very accurate. Sorry for wasting your time.

As jaypb points out, regular expressions are a good idea here. If you're interested in some Python 101, I'll give you some simple code to get you started on your own solution.
The following code is a quick and dirty way to lump every six lines of a file into one line of a new file:
# open some files to read and write
oldfile = open("oldfilename", "r")
newfile = open("newfilename", "w")

# initiate variables and iterate over the input file
count = 0
outputLine = ""
for line in oldfile:
    # append each line of the file to the variable outputLine
    # str.strip() will remove whitespace at the beginning and end of a string
    outputLine = outputLine + line.strip()
    # increment the counter
    count = count + 1
    # you know your interesting stuff is six lines long, so
    # write the output string to file and reset it every six lines
    if count % 6 == 0:
        newfile.write(outputLine + "\n")
        outputLine = ""

# clean up
oldfile.close()
newfile.close()
This isn't exactly what you want to do, but it gets you close. For instance, if you want to get rid of the leading " - " on the email addresses and replace it with "=", instead of just appending to outputLine you'd do something like

if some_condition:
    outputLine = outputLine + '=' + line[3:].strip()
That last bit is a Python slice: [3:] means "give me everything from index 3 onward" (that is, drop the first three characters), and it works for sequences like strings and lists.
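For example, a throwaway snippet (assuming the line is exactly " - me#my.email.com"):

line = " - me#my.email.com"
print(line[3:])  # prints "me#my.email.com"; the first three characters are dropped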
That'll get you started. Use Google and the Python docs (for instance, googling "python strip" takes you to the built-in types page for Python 2.7.10) to understand every line above, then change things around to get what you need.

Since you are replacing text substrings with different text substrings, this is a pretty natural place to use regular expressions.
Python, fortunately, has an excellent regular expressions library called re.
You will probably want to heavily utilize
re.sub(pattern, repl, string)
Look at the documentation here:
https://docs.python.org/3/library/re.html
Update: Here's an example of how to use the regular expression library:
#!/usr/bin/env python
import re

body = None
with open("sample.txt") as f:
    body = f.read()

# Replace emails followed by other emails
body = re.sub(" * - ([a-zA-Z.#]*)\n * -", r"\1,", body)
# Replace declarations of object properties
body = re.sub(" +([a-zA-Z_]*): *[\n]*", r"\1=", body)
# Strip newlines
body = re.sub(":?\n", ":", body)

print(body)
Example output:
$ python example.py
object_name_here:object_owner=me#my.email.com, user#another.email.com:object_id=some_string_here:identification=some_other_string_here

Related

Removing an imported text file (Python)

I'm trying to remove a couple of lines from a text file that I imported from my Kindle. The text looks like:
Shall I come to you?
Nicholls David, One Day, loc. 876-876
Dexter looked up at the window of the flat where Emma used to live.
Nicholls David, One Day, loc. 883-884
I want to grab the bin bag and do a forensics
Sophie Kinsella, I've Got Your Number, loc. 64-64
The complete file is longer; this is just a piece of the document. The aim of my code is to remove all lines where "loc. " is written so that just the extracts remain. My target can also be seen as removing the line just before the blank line.
My code so far look like this:
f = open('clippings_export.txt', 'r', encoding='utf-8')
message = f.read()
line = message[0:400]
f.close()
key = ["l", "o", "c", ".", " "]
for i in range(0, len(line)-5):
    if line[i] == key[0]:
        if line[i+1] == key[1]:
            if line[i+2] == key[2]:
                if line[i+3] == key[3]:
                    if line[i+4] == key[4]:
                        pass  # index i marks where "loc. " starts
The last if finds exactly the position (indices) where each "loc. " occurs in the file. Nevertheless, after this stage I do not know how to move back within the line so that the code catches where the line starts and the line can be completely removed. What could I do next? Do you recommend another way to remove this line?
Thanks in advance!
I think that the question might be a bit misleading!
Anyway, if you simply want to remove those lines, you need to check whether they contain the "loc." substring. Probably the easiest way is to use the in operator.
Instead of getting the whole file from the read() function, read the file line by line (using the readlines() function, for example). You can then check whether each line contains your key and omit it if it does.
Since the result is now a list of strings, you might want to merge it with str.join().
Here I used another list to store the desired lines; you can also use the "more pythonic" filter() or a list comprehension (there's an example in the similar question I mention below).
f = open('clippings_export.txt', 'r', encoding='utf-8')
lines = f.readlines()
f.close()

filtered_lines = []
for line in lines:
    if "loc." in line:
        continue
    else:
        filtered_lines.append(line)

result = ""
result = result.join(filtered_lines)
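As mentioned above, the same filtering can also be written as a list comprehension; a minimal equivalent sketch:

filtered_lines = [line for line in lines if "loc." not in line]
result = "".join(filtered_lines)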
By the way, I thought this might be a duplicate - here's a question about the opposite (that is, wanting the lines which do contain the key).

Compare two text files in ruby

I have two text files, file1.txt and file2.txt. I want to find the difference between the files, highlighting the equal, inserted, and deleted text. The final goal is to create an HTML file in which the text (equal, inserted, and deleted) is highlighted with different colors and styles.
file1.txt
I am testing this ruby code for printing the file diff.
file2.txt
I am testing this code for printing the file diff.
I am using this code
doc1 = File.open('file1.txt').read
doc2 = open('file2.txt').read
final_doc = Diffy::Diff.new(doc1, doc2).each_chunk.to_a
The output is:
-I am testing this ruby code for printing the file diff.
+I am testing this code for printing the file diff.
However, I need the output in a format similar to the one below.
equal:
I am testing this
insertion:
ruby
equal:
code for printing the file diff.
In Python there is difflib, through which this can be achieved, but I have not found such functionality in Ruby.
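For reference, a minimal sketch of the difflib behavior the question refers to, splitting on words (swapping the argument order flips whether the extra word shows up as a deletion or an insertion):

import difflib

a = "I am testing this ruby code for printing the file diff.".split()
b = "I am testing this code for printing the file diff.".split()

# get_opcodes() yields (tag, i1, i2, j1, j2) with tag being one of
# 'equal', 'delete', 'insert', or 'replace'
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
    print(tag + ":", " ".join(a[i1:i2]) or " ".join(b[j1:j2]))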
I've found there are a few different libraries in Ruby for doing diffs, but they're more focused on checking line by line. I created some code to compare a couple of relatively short strings and show the differences; it's a sort of quick hack that works great if it doesn't matter too much that removed sections aren't highlighted in the exact places they were removed from - doing that would require a bit more thought about the algorithm. But this code works wonders for a small amount of text at a time.
The key, as with any language processing, is getting your tokenization right. You can't just process a string word by word. Really, the best way would be to first loop through recursively and associate each token with a position in the text, then use that to do the analysis, but the method below is fast and easy.
def self.change_differences(text1, text2) # oldtext, newtext
  result = ""
  tokens = text2.split(/(?<=[?.!,])/) # Positive look-behind regexp.
  for token in tokens
    if text1.sub!(token, "") # Yes, it contained it.
      result += "<span class='diffsame'>" + token + "</span>"
    else
      result += "<span class='diffadd'>" + token + "</span>"
    end
  end
  tokens = text1.split(/(?<=[?.!,])/) # Positive look-behind regexp.
  for token in tokens
    result += "<span class='diffremove'>" + token + "</span>"
  end
  return result
end
Source: me!

Python regex to find characters unsupported by XML 1.0 returns no results

I'm writing a Python 3.2 script to find characters in a Unicode XML-formatted text file which aren't valid in XML 1.0. The file itself isn't XML 1.0, so it could easily contain characters supported in 1.1 and later, but the application which uses it can only handle characters valid in XML 1.0 so I need to find them.
XML 1.0 doesn't support any characters in the range \u0001-\u0020, except for \u0009, \u000A, \u000D, and \u0020. Above that, \u0021-\uD7FF, \uE000-\uFFFD, and \u010000-\u10FFFF are also supported ranges, but nothing else. In my Python code, I define that regex pattern this way:
re.compile("[^\u0009\u000A\u000D\u0020\u0021-\uD7FF\uE000-\uFFFD\u010000-\u10FFFF]")
However, the code below isn't finding a known bad character in my sample file (\u0007, the 'bell' character.) Unfortunately I can't provide a sample line (proprietary data).
I think the problem is in one of two places: Either a bad regex pattern, or how I'm opening the file and reading in lines—i.e. an encoding problem. I could be wrong, of course.
Here's the relevant code snippet.
processChunkFile() takes three parameters: chunkfile is an absolute path to a file (a 'chunk' of 500,000 lines of the original file, in this case) which may or may not contain a bad character. outputfile is an absolute path to an optional, pre-existing file to write output to. verbose is a boolean flag to enable more verbose command-line output. The rest of the code is just getting command-line arguments (using argparse) and breaking the single large file up into smaller files. (The original file's typically larger than 4GB, hence the need to 'chunk' it.)
def processChunkFile(chunkfile, outputfile, verbose):
    """
    Processes a given chunk file, looking for XML 1.0 chars.
    Outputs any line containing such a character.
    """
    badlines = []
    if verbose:
        print("Processing file {0}".format(os.path.basename(chunkfile)))
    # open given chunk file and read it as a list of lines
    with open(chunkfile, 'r') as chunk:
        chunklines = chunk.readlines()
    # check to see if a line contains a bad character;
    # if so, add it to the badlines list
    for line in chunklines:
        if badCharacterCheck(line, verbose) == True:
            badlines.append(line)
    # output to file if required
    if outputfile is not None:
        with open(outputfile.encode(), 'a') as outfile:
            for badline in badlines:
                outfile.write(str(badline) + '\n')
    # return list of bad lines
    return badlines

def badCharacterCheck(line, verbose):
    """
    Use regular expressions to seek characters in a line
    which aren't supported in XML 1.0.
    """
    invalidCharacters = re.compile("[^\u0009\u000A\u000D\u0020\u0021-\uD7FF\uE000-\uFFFD\u010000-\u10FFFF]")
    matches = re.search(invalidCharacters, line)
    if matches:
        if verbose:
            print(line)
            print("FOUND: " + matches.groups())
        return True
    return False
\u010000
Python \u escapes are four-digit only, so that's U+0100 followed by two U+0030 Digit Zeros. Use a capital-U escape with eight digits for characters outside the BMP:
\U00010000-\U0010FFFF
Note that this, and your expression in general, won't work on 'narrow builds' of Python, where strings are based on UTF-16 code units and characters outside the BMP are handled as two surrogate code units. (Narrow builds were the default on Windows. Thankfully they went away with Python 3.3.)
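For what it's worth, a minimal sketch of the corrected pattern (assuming Python 3.3+ or a wide build; the merged \u0020-\uD7FF range is equivalent to the original \u0020 plus \u0021-\uD7FF):

import re

# eight-digit \U escapes cover the characters outside the BMP
invalid = re.compile("[^\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]")
print(bool(invalid.search("hello\x07world")))  # True: U+0007 Bell is not valid in XML 1.0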
it could easily contain characters supported in 1.1 and later
(Although XML 1.1 can only contain those characters when they're encoded as numeric character references &#...;, so the file itself may still not be well-formed.)
open(chunkfile, 'r')
Are you sure the chunkfile is encoded in locale.getpreferredencoding?
The original file's typically larger than 4GB, hence the need to 'chunk' it.
Ugh, monster XML is painful. But with sensible streaming APIs (and filesystems!) it should still be possible to handle. For example, here you could process each line one at a time using for line in chunk: instead of reading the whole chunk at once with readlines().
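A minimal sketch of that change to processChunkFile(), reusing the names from the question's code:

with open(chunkfile, 'r') as chunk:
    # stream one line at a time instead of loading every line into memory
    for line in chunk:
        if badCharacterCheck(line, verbose):
            badlines.append(line)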
re.search(invalidCharacters, line)
As invalidCharacters is already a compiled pattern object, you can just call invalidCharacters.search(...).
Having said all that, it still matches U+0007 Bell for me.
The fastest way to remove words, characters, or strings between two known tags or two known characters in a string is to use re directly: first replace the delimiters with comment markers, then remove everything between them, as shown below.
var = re.sub('<script>', '<!--', var)
var = re.sub('</script>', '-->', var)
# And finally
var = re.sub('<!--.*?-->', '', var)
It removes everything, and it works faster and cleaner than Beautiful Soup. Python's regular expressions have not altered or changed much from the regular expressions used at the machine level, so why iterate many times when a single pass can find it all as one chunk in one iteration? You can do the same with individual characters:
var = re.sub('\[', '<!--', var)
var = re.sub('\]', '-->', var)
# And finally
var = re.sub('<!--.*?-->', '', var)  # wipes out everything in between, brackets included
And you do not need Beautiful Soup. You can also scrape data this way if you understand how it works.

Putting parts of a text file into a list

I have this text file and I need certain parts of it to be inserted into a list.
The file looks like:
blah blah
.........
item: A,B,C.....AA,BB,CC....
Other: ....
....
I only need to rip out the A,B,C.....AA,BB,CC.... parts and put them into a list. That is, everything after "item:" and before "Other:".
This can easily be done with small inputs, but the problem is that the file may contain a large number of items and may be pretty huge. Would using rfind and strip be as efficient for huge inputs as for small ones, algorithmically speaking?
What would be an efficient way to do it?
I can see no need for rfind() nor strip().
It looks like you're simply trying to do:
start = 'item: '
end = 'Other: '
should_append = False
the_list = []
for line in open('file'):
    if line.startswith(start):
        data = line[len(start):]
        the_list.append(data)
        should_append = True
    elif line.startswith(end):
        should_append = False
        break
    elif should_append:
        the_list.append(line)
print the_list
This doesn't hold the whole file in memory, just the current line and the list of lines found between the start and the end patterns.
To answer the question about efficiency specifically, reading in the file and comparing it line by line will net O(n) average case performance.
Example code:
pattern = "item:"
with open("file.txt", 'r') as f:
    for line in f:
        if line.startswith(pattern):
            # You can do what you like with it; split it along whitespace
            # or a character, then put it into a list.
            items = line[len(pattern):].split(',')
You're searching the entire file sequentially, and you have to compare some number of elements in the file before you come across the element you're looking for.
You have the option of building a search tree instead. While it costs O(n) to build, it would cost O(log_k n) time to search (resulting in O(n) time overall, again), where k is the number of starting characters you'd have in your list.
Though I usually jump at the chance to employ regular expressions, I feel like for a single occurrence in a large file, it would be much more work and too computationally expensive to use regex. So perhaps the straightforward answer (in python) would be most appropriate:
s = 'item:'
yourlist = next(line[len(s)+1:].split(',') for line in open("c:\zzz.txt") if line.startswith(s))
This, of course, assumes that 'item:' doesn't exist on any other lines that are NOT followed by 'other:', but in the event 'item:' exists only once and at the start of the line, this simple generator should work for your purposes.
This problem is simple enough that it really only has two states, so you could just use a Boolean variable to keep track of what you are doing. But the general case for problems like this is to write a state machine that transitions from one state to the next until it has worked its way through the problem.
I like to use enums for states; unfortunately Python doesn't really have a built-in enum. So I am using a class with some class variables to store the enums.
Using the standard Python idiom for line in f (where f is the open file object) you get one line at a time from the text file. This is an efficient way to process files in Python; your initial lines, which you are skipping, are simply discarded. Then when you collect items, you just keep the ones you want.
This answer is written to assume that "item:" and "Other:" never occur on the same line. If this can ever happen, you need to write code to handle that case.
EDIT: I made the start_code and stop_code into arguments to the function, instead of hard-coding the values from the example.
import sys

class States:
    pass
States.looking_for_item = 1
States.collecting_input = 2

def get_list_from_file(fname, start_code, stop_code):
    lst = []
    state = States.looking_for_item
    with open(fname, "rt") as f:
        for line in f:
            l = line.lstrip()
            # Don't collect anything until after we find "item:"
            if state == States.looking_for_item:
                if not l.startswith(start_code):
                    # Discard input line; stay in same state
                    continue
                else:
                    # Found item! Advance state and start collecting stuff.
                    state = States.collecting_input
                    # chop out start_code
                    l = l[len(start_code):]
                    # Collect everything after "item":
                    # Split on commas to get strings. Strip white-space from
                    # ends of strings. Append to lst.
                    lst += [s.strip() for s in l.split(",")]
            elif state == States.collecting_input:
                if not l.startswith(stop_code):
                    # Continue collecting input; stay in same state
                    # Split on commas to get strings. Strip white-space from
                    # ends of strings. Append to lst.
                    lst += [s.strip() for s in l.split(",")]
                else:
                    # We found our terminating condition! Don't bother to
                    # update the state variable, just return lst and we
                    # are done.
                    return lst
            else:
                print("invalid state reached somehow! state: " + str(state))
                sys.exit(1)

lst = get_list_from_file(sys.argv[1], "item:", "Other:")
# do something with lst; for now, just print
print(lst)
I wrote an answer that assumes that the start code and stop code must occur at the start of a line. This answer also assumes that the lines in the file are reasonably short.
You could, instead, read the file in chunks, and check to see if the start code exists in the chunk. For this simple check, you could use if code in chunk (in other words, use the Python in operator to check for a string being contained within another string).
So, read a chunk, check for start code; if not present discard the chunk. If start code present, begin collecting chunks while searching for the stop code. In a recent Python version you can concatenate the blocks one at a time with reasonable performance. (In an old version of Python you should store the chunks in a list, then use the .join() method to join the chunks together.)
Once you have built a string that holds data from the start code to the end code, you can use .find() and .rfind() to find the start code and end code, and then cut out just the data you want.
If the start code and stop code can occur more than once in the file, wrap all of the above in a loop and loop until end of file is reached.
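A minimal sketch of that chunked approach (the chunk size is an arbitrary illustrative choice, and for simplicity this ignores a code that spans a chunk boundary):

def extract_between(fname, start_code, stop_code, chunk_size=65536):
    collected = ""
    found_start = False
    with open(fname, "rt") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return None  # reached end of file without finding both codes
            if not found_start:
                if start_code in chunk:
                    found_start = True
                    # keep everything from the start code onward
                    collected = chunk[chunk.find(start_code):]
            else:
                collected += chunk
            if found_start and stop_code in collected:
                # cut out just the data between the two codes
                return collected[len(start_code):collected.find(stop_code)]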

How to search for string in Python by removing line breaks but return the exact line where the string was found?

I have a bunch of PDF files that I have to search against a set of keywords. I have to extract the exact line where a keyword is found. I first used xpdf's pdf2text to convert the files to text. (I tried solr but had a tough time tailoring the output/schema to suit my requirements.)
import sys

file_name = sys.argv[1]
searched_string = sys.argv[2]
result = [(line_number+1, line) for line_number, line in enumerate(open(file_name)) if searched_string.lower() in line.lower()]
#print result
for each in result:
    print each[0], each[1]
ThinkCode:~$ python find_string.py sample.txt "String Extraction"
The problem I have with this is for cases where the search string is broken across the end of a line:
If you are going to index large binary files, remember to change the
size limits. String
Extraction is a common problem
If I am searching for 'String Extraction', I will miss this keyword if I use the code presented above. What is the most efficient way of achieving this without making 2 copies of the text file (one for searching for the keyword to extract the line number, and the other for removing line breaks and finding the keyword, to handle the case where the keyword spans 2 lines)?
Much appreciated guys!
Note: Some considerations without any code, but I think they belong in an answer rather than in a comment.
My idea would be to search only for the first keyword; if a match is found, search for the second. This allows you, when the match is found at the end of a line, to take the next line into consideration, doing line concatenation only if a match was found in the first place*.
Edit:
Coded a simple example and ended up using a different algorithm; the basic idea behind it is this code snippet:
import re

def iterwords(fh):
    for number, line in enumerate(fh):
        for word in re.split(r'\s+', line.strip()):
            yield number, word
It iterates over the file handler and produces a (line_number, word) tuple for each word in the file.
The matching afterwards becomes pretty easy; you can find my implementation as a gist on github. It can be run as follows:
python search.py 'multi word search string' file.txt
There is one main concern with the linked code that I didn't code a workaround for, both for performance and complexity reasons. Can you figure it out? (Spoiler: try searching for a sentence whose first word appears two times in a row in the file.)
* I didn't perform any testing on my own, but this article and the Python wiki suggest that string concatenation is not that efficient in Python (I don't know how current that information is).
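For illustration only (this is not the linked gist), one way to consume iterwords() that also copes with the spoiler case, by restarting the match whenever the mismatched word could itself begin a new occurrence:

def find_phrase(fh, phrase):
    words = [w.lower() for w in phrase.split()]
    matched = 0
    start = None
    for number, word in iterwords(fh):
        w = word.lower().strip('.,!?;:')
        if w == words[matched]:
            if matched == 0:
                start = number
            matched += 1
            if matched == len(words):
                return start  # line number where the match begins
        elif w == words[0]:
            # the mismatched word could start a new match (the spoiler case)
            matched = 1
            start = number
        else:
            matched = 0
    return None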
There may be a better way of doing it, but my suggestion would be to start by taking in two lines (let's call them line1 and line2), concatenating them into line3 or something similar, and then search that resultant line.
Then you'd assign line2 to line1, get a new line2, and repeat the process.
Use the flag re.MULTILINE when compiling your expressions: http://docs.python.org/library/re.html#re.MULTILINE
Then use \s to represent all white space (including new lines).
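A small sketch of that suggestion, using the broken-keyword example from the question (note that \s+ does the real work of crossing the line break; re.MULTILINE only changes how ^ and $ behave):

import re

text = ("If you are going to index large binary files, remember to change the\n"
        "size limits. String\nExtraction is a common problem")
pattern = re.compile(r"String\s+Extraction", re.MULTILINE)
print(bool(pattern.search(text)))  # True: the match spans the line break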
