Python: How to read text between two empty lines into a string

I'm a beginner at programming and Python, and I'm writing a script to do stuff with .srt subtitle files. My problem is that I don't know how to read through a file and analyze the text in chunks: first the text between the beginning of the file and the first empty line, then the text between that empty line and the next one, and so on until the end of the file ("analyze" meaning e.g. calculate the length of one part, convert another part to numbers, etc.).
You can read about the .srt format specification and see an example here (type: Plain); there's an empty line at the end of the file. I want to compare the display time/duration of each subtitle against the number of characters in it. Starting from the beginning of the file, each subtitle (with its number, duration info and text) is separated from the next one by an empty line (a "\n"; I can find them with something like if "\n" in line and len(line) == 2:). The time codes always contain a "-->" and always end in three digits, so if I have that in a string, I can figure out where it is. The problem is, I need to somehow do the following:
Read the subtitle text, which can be 1-3 lines with line breaks, calculate its character length.
Read the duration, convert to duration in seconds.
Read the line number (to be able to output it somewhere with my results, e.g. "duration of line 44 is 4.54 s").
I can do the second easily, but I'm not sure how to go over the whole file and tell Python: find the end of each subtitle's text, calculate the character length of each of its lines, add that up, read the duration, divide the two, output the result with the subtitle number, and then do the same with the next subtitle until the end of the file. If it were just one subtitle, I could do it easily, but I'm not sure how to run that check on a single one and then move on to the next. I've been looking for two hours and can't find anything like this.

Regular expressions can be a powerful tool for this kind of processing.
You can use a regular expression to match or parse a single record at a time, or run one against the entire file.
If you don't know regex in Python yet, I highly recommend working through a few tutorials on the topic; that should give you plenty of ideas about how it can be applied to your problem.
There are many great references on the topic, but here is just one: http://www.diveintopython.net/regular_expressions/
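As a rough illustration of how this might look in practice, here is a minimal sketch that uses a small regular expression only to split the file into blank-line-separated blocks and then parses each block with plain string operations. The file name 'example.srt' and the helper names are placeholders, and the assumption that every block is "number, time-code line containing -->, then 1-3 lines of text" comes from the question; treat this as untested example code, not a finished script.
import re

def srt_stats(path):
    """Yield (index, duration_in_seconds, character_count) for each subtitle block."""
    with open(path) as f:
        # Blocks are separated by one or more blank lines.
        blocks = re.split(r'\n\s*\n', f.read().strip())
    for block in blocks:
        lines = block.splitlines()
        index = int(lines[0])                    # subtitle number
        start, end = lines[1].split(' --> ')     # e.g. 00:01:21,600 --> 00:01:24,200
        duration = to_seconds(end) - to_seconds(start)
        text = ' '.join(lines[2:])               # the 1-3 lines of subtitle text
        yield index, duration, len(text)

def to_seconds(timecode):
    """Convert 'HH:MM:SS,mmm' to a number of seconds."""
    hours, minutes, rest = timecode.strip().split(':')
    seconds, millis = rest.split(',')
    return int(hours) * 3600 + int(minutes) * 60 + int(seconds) + int(millis) / 1000.0

for index, duration, chars in srt_stats('example.srt'):
    print('duration of line %d is %.2f s, %d characters' % (index, duration, chars))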

Related

How to save data to a file on separate items instead of one long string?

I am having trouble simply saving items into a file for later reading. When I save the file, instead of listing the items as single items, it runs the data together as one long string. According to my Google searches, it should not be doing that.
What am I doing wrong?
Code:
with open('Ped.dta','w+') as p:
    p.write(str(recnum))         # Add record number to top of file
    for x in range(recnum):
        p.write(dte[x])          # Write date
        p.write(str(stp[x]))     # Write Steps number
Since you do not show your data or your output, I cannot be sure, but it seems you are trying to use the write method like the print function, and there are important differences.
Most importantly, write does not follow the characters it writes with any separator (like the space print uses by default) or end (like the \n print adds by default).
Therefore there is no space between your date and steps number, and no break between the lines, because you did not write them and Python did not add them.
So add those. Try the lines
p.write(dte[x]) # Write date
p.write(' ') # space separator
p.write(str(stp[x])) # Write Steps number
p.write('\n') # line terminator
Note that I do not know the format of your "date" that is written, so you may need to convert that to text before writing it.
Now that I have the time, I'll implement @abarnert's suggestion (from a comment) and show you how to get the advantages of the print function while still writing to a file. Just use the file= parameter, available in Python 3, or in Python 2 after executing the statement
from __future__ import print_function
Using print you can do my four lines above in one line, since print automatically adds the space separator and newline end:
print(dte[x], str(stp[x]), file=p)
This does assume that your date datum dte[x] is to be printed as text.
Try adding a newline ('\n') character at the end of your lines, as shown in the docs. This should solve the problem of 'listing the items as single items', but the file you create may still not be greatly structured.
For your further Google searches, you may want to look into serialization, as well as the json and csv formats, which are covered in the Python standard library.
Your question would have benefited from a very small example of the recnum variable. Also, the original f.close() is not necessary since you have a with statement; see this related question here on SO.
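As a small illustration of the csv suggestion, here is a sketch with made-up sample data shaped like the question's dte/stp lists; it is written for Python 3 (in Python 2 you would open the files in 'wb'/'rb' mode instead of passing newline='').
import csv

# Hypothetical sample data shaped like the question's dte / stp lists.
dte = ['2015-03-01', '2015-03-02']
stp = [4200, 3800]
recnum = len(dte)

with open('Ped.dta', 'w', newline='') as p:
    writer = csv.writer(p)
    writer.writerow([recnum])                   # record count on the first row
    for x in range(recnum):
        writer.writerow([dte[x], stp[x]])       # one date/steps pair per row

# Reading it back gives separate fields instead of one long string.
with open('Ped.dta', newline='') as p:
    reader = csv.reader(p)
    recnum = int(next(reader)[0])
    for date, steps in reader:
        print(date, steps)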

Python - Dividing a book in PDF form into individual text files that correspond with page numbers

I've converted my PDF file into a long string using PDFminer.
I'm wondering how I should go about dividing this string into smaller, individual strings/pages. Each page is divided by a certain series of characters (CRLF, FF, page number etc.), and the string should be split, and each piece appended to a new text file, wherever these characters occur.
I have no experience with regex, but is using the re module the best way to go about this?
My vague idea for implementation is that I have to iterate through the file using the re.search function, creating text files with each new form feed found. The only code I have is PDF > text conversion. Can anyone point me in the right direction?
Edit: I think the expression I should use is something like ^.*(?=(\d\n\n\d\n\n\f\bFavela\b)) (capture everything before two digits, the line breaks, and the book's title 'Favela', which appears at the top of each page).
Can I save these \d digits as variables? I want to use them as file names, as I iterate through the book and scoop up the portions of text divided by each appearance of \f\Favela.
I'm thinking the re.sub method would do it, looping through and replacing with an empty string as I go.
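For what it's worth, a minimal sketch of the form-feed splitting idea: the input file name 'book.txt', the output file naming, and the pattern used to pick up the printed page number are guesses based on the description above, not tested against the actual book.
import re

# Assumes the pdfminer output was saved to 'book.txt' (hypothetical name) and
# that pages are separated by form feed characters ('\f').
with open('book.txt') as f:
    text = f.read()

for fallback_number, page in enumerate(text.split('\f'), start=1):
    if not page.strip():
        continue
    # Try to pick up the printed page number standing alone near the top of
    # the page; this pattern is a guess based on the layout described above.
    match = re.search(r'^\s*(\d+)\s*$', page, re.MULTILINE)
    page_number = match.group(1) if match else str(fallback_number)
    with open('page_%s.txt' % page_number, 'w') as out:
        out.write(page)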

Python: Read in Data from File

I have to read data from a text file from the command line. It is not too difficult to read in each line, but I need a way to separate each part of the line.
The file contains the following in order for several hundred lines:
String (Sometimes more than 1 word)
Integer
String (Sometimes more than 1 word)
Integer
So for example the input could have:
Hello 5 Sample String 10
The current implementation I have for reading in each line is as follows... how can I modify it to separate it into what I want? I have tried splitting the line, but I always end up getting only one character of the first string this way with no integers or any part of the second string.
with open(sys.argv[1],"r") as f:
for line in f:
print(line)
The desired output would be:
Hello
5
Sample String
10
and so on for each line in the file. There could be thousands of lines in the file. I just need to separate each part so I can work with them separately.
The program can't magically split lines the way you want. You will need to read in one line at a time and parse it yourself based on the format.
Since there are two integers and an indeterminate number of (what I assume are) space-delimited words, you may be able to use a regular expression to find the integers then use them as delimiters to split up the line.
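For example, a minimal sketch of that regex approach might look like the following. The pattern and the assumption that the words themselves never contain bare integers are mine, not the asker's; lines that do not fit the word/number/word/number shape are simply skipped.
import re
import sys

# Matches: some words, an integer, more words, another integer (as described above).
line_pattern = re.compile(r'^(.+?)\s+(\d+)\s+(.+?)\s+(\d+)\s*$')

with open(sys.argv[1], "r") as f:
    for line in f:
        match = line_pattern.match(line)
        if match is None:
            continue                    # skip lines that do not fit the format
        first_words, first_num, second_words, second_num = match.groups()
        print(first_words)
        print(first_num)
        print(second_words)
        print(second_num)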

How to exclude \n and \r from tell() count in Python 2.7

I want to keep track of the file pointer on a simple text file (just a few lines), after having used readline() on it. I observed that the tell() function also counts the line endings.
My questions:
How can I instruct the code to skip counting the line endings?
How can I do this regardless of the line-ending type (so it works the same whether the text file uses just \n, just \r, or both)?
You are navigating into trouble.
Don't do that: either use the number "tell" gives you, or count what you have in memory, regardless of the file contents.
You won't be able to correlate a position in the text you have read into memory with a physical place in a text file: text files are not meant for that. They are meant to be read one line at a time, or as a whole: your program consumes the text and lets the OS worry about the file position.
You can open your file in binary mode, read its contents as they are into memory, and have some method of retrieving readable text from those contents as needed; doing this with a proper class can make it not that messy.
Consider the problem you already have with line endings, which can be either "\n" or "\r\n" and still count as a single character, and now imagine that situation a hundred times more complex if the file has a single UTF-8 encoded character that takes more than one byte to encode.
And even in binary files, knowing the absolute file pointer position is only useful in a handful of situations where, usually, one would be better off using a database engine to start with.
tell is tell. It counts the number of bytes from the start of the file to the cursor. \n and \r are bytes, so they get counted. If you want to count the number of bytes, but not count certain characters, you will have to do it manually:
data_read = … # data you have already read
len([b for b in data_read if b not in '\r\n'])
The bad news is that it's far more annoying to do this than just looking at tell. The good news is that it answers both your questions.
or, I suppose you could do
yourfile.tell() - data_read.count('\r') - data_read.count('\n')
result = re.sub("[\r\n]", "", subject)
http://regex101.com/r/kM6dA1
Match a single character present in the list below «[\r\n]»
A carriage return character «\r»
A line feed character «\n»
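Putting the two suggestions together, here is a rough Python 2.7 sketch (the file name is a placeholder): the file is opened in binary mode so tell() reports raw byte offsets, while a separate running count ignores '\r' and '\n'.
import re

with open('example.txt', 'rb') as f:    # binary mode: tell() counts raw bytes
    chars_seen = 0                      # running count excluding line endings
    line = f.readline()
    while line:
        chars_seen += len(re.sub("[\r\n]", "", line))
        print f.tell(), chars_seen      # raw offset vs. adjusted count
        line = f.readline()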

How to search for string in Python by removing line breaks but return the exact line where the string was found?

I have a bunch of PDF files that I have to search for a set of keywords against. I have to extract the exact line where the keyword was found. I first used xpdf's pdftotext to convert the files to text. (I tried Solr but had a tough time tailoring the output/schema to suit my requirements.)
import sys
file_name = sys.argv[1]
searched_string = sys.argv[2]
result = [(line_number+1, line) for line_number, line in enumerate(open(file_name)) if searched_string.lower() in line.lower()]
#print result
for each in result:
    print each[0], each[1]
ThinkCode:~$ python find_string.py sample.txt "String Extraction"
The problem I have with this is the case where the search string is broken across the end of a line:
If you are going to index large binary files, remember to change the
size limits. String
Extraction is a common problem
If I am searching for 'String Extraction', I will miss this keyword if I use the code presented above. What is the most efficient way of achieving this without making two copies of the text file (one for searching for the keyword to extract the line number, and the other with line breaks removed to catch the case where the keyword spans two lines)?
Much appreciated guys!
Note: Some considerations without any code, but I think they belong to an answer rather than to a comment.
My idea would be to search only for the first keyword; if a match is found, search for the second. That way, if the match falls at the end of a line, you can take the next line into consideration, and you only do the line concatenation when a match was found in the first place*.
Edit:
Coded a simple example and ended up using a different algorithm; the basic idea behind it is this code snippet:
def iterwords(fh):
    for number, line in enumerate(fh):
        for word in re.split(r'\s+', line.strip()):
            yield number, word
It iterates over the file handler and produces a (line_number, word) tuple for each word in the file.
The matching afterwards becomes pretty easy; you can find my implementation as a gist on github. It can be run as follows:
python search.py 'multi word search string' file.txt
There is one main concern with the linked code that I didn't code a workaround for, both for performance and for complexity reasons. Can you figure it out? (Spoiler: try searching for a sentence whose first word appears twice in a row in the file.)
* I didn't perform any testing on my own, but this article and the Python wiki suggest that string concatenation is not that efficient in Python (I don't know how current that information is).
There may be a better way of doing it, but my suggestion would be to start by taking in two lines (let's call them line1 and line2), concatenating them into line3 or something similar, and then search that resultant line.
Then you'd assign line2 to line1, get a new line2, and repeat the process.
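A minimal sketch of that sliding-window idea is below; the function name and the decision to lower-case both sides are my own choices. Note that a keyword contained entirely within one line will show up in two consecutive windows, so you may want to deduplicate the output.
def search_across_lines(path, keyword):
    """Print the line numbers of adjacent-line windows containing keyword."""
    keyword = keyword.lower()
    with open(path) as f:
        previous, previous_number = '', 0
        for number, line in enumerate(f, start=1):
            # "line3" from the description above: the previous line joined
            # to the current one, so a keyword broken across the line break
            # can still be found.
            window = previous.rstrip() + ' ' + line.rstrip()
            if keyword in window.lower():
                print('match in lines %d-%d' % (previous_number or number, number))
            previous, previous_number = line, number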
Use the flag re.MULTILINE when compiling your expressions: http://docs.python.org/library/re.html#re.MULTILINE
Then use \s to represent all white space (including new lines).
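Following that suggestion, a sketch might look like this: the literal spaces in the search phrase are turned into \s+ so the match can cross a line break, and the line number is recovered by counting newlines before the match (that last step is my addition). re.MULTILINE is omitted here because it only affects ^ and $; \s on its own already matches newlines.
import re

def find_phrase(path, phrase):
    """Search the whole file at once, letting the phrase span line breaks."""
    with open(path) as f:
        text = f.read()
    # Replace the literal spaces in the phrase with \s+ so any whitespace
    # (including a newline) between the words is accepted.
    pattern = re.compile(r'\s+'.join(re.escape(word) for word in phrase.split()),
                         re.IGNORECASE)
    for match in pattern.finditer(text):
        line_number = text.count('\n', 0, match.start()) + 1
        print('%d: %s' % (line_number, match.group(0).replace('\n', ' ')))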
