I have a question regarding splitting PDF files. Basically, I have a collection of PDF files that I want to split by paragraph, so that each paragraph of a PDF becomes a file of its own. I would appreciate it if you could help me with this, preferably in Python, but if that is not possible any language will do.
You can use pdftotext for this and wrap it in a Python subprocess call. Alternatively, you could use a library that does the extraction for you, such as textract. Here is a quick example. Note: I have used a run of four or more whitespace characters as the delimiter to convert the text into a list of paragraphs; you might want to use a different technique.
import re
import textract

# read the content of the PDF as text (textract returns bytes, so decode it)
text = textract.process('file_name.pdf').decode('utf-8')

# use runs of four or more whitespace characters as the paragraph delimiter
print(re.split(r'\s{4,}', text))
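To take it one step further and write each paragraph to its own file, as the question asks, something like this should work (the output file names are just an illustration):

import re
import textract

# extract the PDF content as text (textract returns bytes, so decode it)
text = textract.process('file_name.pdf').decode('utf-8')

# split on runs of four or more whitespace characters
paragraphs = re.split(r'\s{4,}', text)

# write each non-empty paragraph to its own numbered text file
for i, paragraph in enumerate(p for p in paragraphs if p.strip()):
    with open('paragraph_{}.txt'.format(i), 'w', encoding='utf-8') as out:
        out.write(paragraph)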
I am new to Python. I used to use UiPath to read in CSV files and extract certain information using regular expressions.
However, when I tried this in Python I get the error "'NoneType' object has no attribute 'group'", which means a match was not found.
But my regular expression works fine in UiPath.
Python Code:
text_lines=f.readlines()
model_regex=re.compile(r'(?<=model choice :).*')
model=model_regex.search('text_lines')
print(model.group())
do I have to place my variable (text_line) in quotation marks here?
model=model_regex.search('text_lines')
I basically open a CSV file and read all the text using
text_lines=f.readlines()
But will text_lines look exactly like the CSV file? I noticed that when I use print(text_lines), all the lines look jumbled up.
Is there a way to read in text_lines so that it looks exactly like the CSV file and my regular expression works?
Or is it that, in Python, I need to loop through text_lines line by line to look for the regex match?
Thank you
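(For what it's worth, a minimal sketch of what the likely fix looks like: pass the variable itself rather than the string 'text_lines', and search the file's text directly; the file name below is just a placeholder.)

import re

model_regex = re.compile(r'(?<=model choice :).*')

with open('myfile.csv', encoding='utf-8') as f:
    text = f.read()                # one string instead of a list of lines

match = model_regex.search(text)   # no quotation marks around the variable
if match:                          # guard against the NoneType error when nothing matches
    print(match.group())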
I have a lot of doc files that I have to convert into DataFrames. My doc files cannot be converted directly because I get an error message:
Test.doc' is not a Word file, content type is 'application/vnd.openxmlformats-officedocument.themeManager+xml.
If I convert my doc files into docx, I can extract the data into a dataframe.
The problem is that the function I'm using requires a path to a docx file (output_file) for converting and I have to remove the docx file through code at the end. I'd prefer to store the docx data in memory and extract data from it to the dataframe. I've tried BytesIO, IOBase, NamedTemporaryFile, Temporary zip etc. with no success.
If there's a way to convert doc files to a dataframe directly, that would make things a lot easier (I've tried most of the popular libraries), or please let me know how to handle the temporary file. I'm attaching my function below.
import os
import comtypes.client

# convert the .doc to .docx via Word, read it into a dataframe, then clean up
word = comtypes.client.CreateObject('Word.Application')
doc = word.Documents.Open(input_file)
doc.SaveAs(output_file, FileFormat=16)  # 16 = wdFormatXMLDocument (.docx)
return_dataframe = docx_to_dataframe(output_file)
doc.Close()
word.Quit()
os.remove(output_file)
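For what it's worth, one way to avoid the manual os.remove is to let Word write the intermediate .docx into a temporary directory that cleans itself up. This is only a sketch; docx_to_dataframe and input_file are taken from the question above.

import os
import tempfile
import comtypes.client

def doc_to_dataframe(input_file):
    word = comtypes.client.CreateObject('Word.Application')
    try:
        with tempfile.TemporaryDirectory() as tmpdir:
            output_file = os.path.join(tmpdir, 'converted.docx')
            doc = word.Documents.Open(input_file)
            doc.SaveAs(output_file, FileFormat=16)  # 16 = wdFormatXMLDocument (.docx)
            doc.Close()
            return docx_to_dataframe(output_file)   # helper from the question
    finally:
        word.Quit()  # the temporary directory and its .docx are removed automatically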
I have a similar use case, and this is the solution I came up with until I find something better...
I basically needed to 1) decode the doc files from base64 format, 2) read the 'file' in memory, which results in a mix of Unicode characters, and 3) use regex to capture the text. Here is how I did it:
import base64
import re
from io import BytesIO

import olefile

# retrieve the base64 string and decode it into bytes, in this case from a df row
message = row['text']
text_bytes = message.encode('ascii')
decoded = base64.decodebytes(text_bytes)

# write the bytes to an in-memory file
result = BytesIO()
result.write(decoded)
result.seek(0)  # rewind to the start before olefile reads it

# open the OLE container and read the WordDocument stream
ole = olefile.OleFileIO(result)
y = ole.openstream('WordDocument').read()
y = y.decode('latin-1', errors='ignore')
# replace all characters that are not in the Unicode ranges below (all Latin characters) or spaces with an asterisk; this could probably be shortened by combining it with the pattern used in the next step
y=(re.sub(r'[^\x0A,\u00c0-\u00d6,\u00d8-\u00f6,\u00f8-\u02af,\u1d00-\u1d25,\u1d62-\u1d65,\u1d6b-\u1d77,\u1d79-\u1d9a,\u1e00-\u1eff,\u2090-\u2094,\u2184-\u2184,\u2488-\u2490,\u271d-\u271d,\u2c60-\u2c7c,\u2c7e-\u2c7f,\ua722-\ua76f,\ua771-\ua787,\ua78b-\ua78c,\ua7fb-\ua7ff,\ufb00-\ufb06,\x20-\x7E]',r'*', y))
# isolate the body of the text from the rest of the gibberish
p = re.compile(r'\*{300,433}((?:[^*]|\*(?!\*{14}))+?)\*{15,}')
result = re.findall(p, y)

# remove the *'s left in the capture group
result = result[0].replace('*', '')
For me, I needed to make sure that accented characters were not lost during decoding, and since my documents are in English, Spanish, and Portuguese, I opted to decode using latin-1. From there I use regex patterns to identify the text I need. After decoding, I found that in all of my documents the capture group is preceded by roughly 400 '*' characters and a ':'. I'm unsure whether this is the norm for all doc files decoded this way, but I used it as a starting point for a regex pattern that isolates the text I need from the rest of the gibberish.
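In case it is useful, the same steps can be wrapped in a small function and applied row by row; the dataframe and column names below are just illustrative, and the regex clean-up shown above would still be applied to the returned string.

import base64
from io import BytesIO

import olefile

def doc_base64_to_raw_text(message):
    # decode the base64 string into the raw .doc bytes and read the WordDocument stream in memory
    decoded = base64.decodebytes(message.encode('ascii'))
    ole = olefile.OleFileIO(BytesIO(decoded))
    return ole.openstream('WordDocument').read().decode('latin-1', errors='ignore')

# assumes a dataframe df with a column called 'text' holding the base64 documents
df['raw_text'] = df['text'].apply(doc_base64_to_raw_text)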
I hope you can help out a new learner of Python. I could not find my problem in other questions, but if so: apologies. What I basically want to do is this:
Read a large number of text files and search each for a number of string terms.
If the search terms are matched, store the corresponding file name to a new file called "filelist", so that I can tell the good files from the bad files.
Export "filelist" to Excel or CSV.
Here is the code that I have so far:
# the text files all contain only simple text, e.g. "6 Apples"
import os
import re
import pandas as pd

filelist = []
for file in os.listdir('C:/mydirectory/'):
    with open('C:/mydirectory/' + file, encoding="Latin1") as f:
        fine = f.read()
        if re.search('APPLES', fine) or re.search('ORANGE', fine) or re.search('BANANA', fine):
            filelist.append(file)

listoffiles = pd.DataFrame(filelist)
writer = pd.ExcelWriter('ListofFiles.xlsx', engine='xlsxwriter')
listoffiles.to_excel(writer, sheet_name='welcome', index=False)
writer.save()
print(filelist)
Questions:
Surely there is a more elegant or time-efficient way? I need to do this for a large number of files :D
Related to the former, is there a way to handle the reading-in of files using pandas? Or would that be less time-efficient? For me as a Stata user, having a dataframe feels a bit more like home....
I added the "Latin1" option, as some characters in the raw data create encoding conflicts. Is there a way to find out which characters are causing the problem? Can I get rid of them easily, e.g. by cutting off the first line beforehand (skiprows maybe)?
Just a couple of things to speed up the script:
1.) Compile your regex beforehand, not every time in the loop (also use | to combine multiple strings into one regex).
2.) Read the files line by line, not all at once.
3.) Use any() to stop searching as soon as you get the first positive match.
For example:
import re
import os

filelist = []
r = re.compile(r'APPLES|ORANGE|BANANA')  # you can add flags=re.I for a case-insensitive search
for file in os.listdir('C:/mydirectory/'):
    with open('C:/mydirectory/' + file, 'r', encoding='latin1') as f:
        if any(r.search(line) for line in f):  # read the file line by line, not all content at once
            filelist.append(file)  # add to list

# convert list to pandas, etc...
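And to finish the "# convert list to pandas" step, the export could mirror the original code, roughly like this:

import pandas as pd

# one column holding the matching file names
listoffiles = pd.DataFrame(filelist, columns=['filename'])

# write the list to Excel, or use listoffiles.to_csv('ListofFiles.csv', index=False) for CSV
listoffiles.to_excel('ListofFiles.xlsx', sheet_name='welcome', index=False)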
I have a folder that contains thousands of raw HTML files. I would like to extract all the href values from each page. What would be the fastest way to do that?
href="what_i_need_here"
import re

with open('file', 'r') as f:
    print(re.findall(r'href="(.+?)"', f.read()))

This is what I guess might work, but there's no way to tell for sure since you didn't provide a sample of the text. The regex used is href="(.+?)", and the file content is read in one go with f.read(). See if it works, or add examples of the text.
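Since you mention a whole folder of files, the same idea extended over a directory might look like this (the folder path is just a placeholder):

import os
import re

href_re = re.compile(r'href="(.+?)"')

all_hrefs = []
for name in os.listdir('html_folder'):
    path = os.path.join('html_folder', name)
    with open(path, 'r', encoding='utf-8', errors='ignore') as f:
        all_hrefs.extend(href_re.findall(f.read()))

print(all_hrefs)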
I am running Python 3.3 on Windows and I need to pull strings out of Word documents. I have been searching far and wide for about a week for the best method to do this. Originally I tried to save the .docx files as .txt and parse through them using REs, but I had some formatting problems with hidden characters - I was using a script to open a .docx and save it as .txt. I am wondering: if I did a proper File > Save As > .txt, would it strip out the odd formatting so that I could properly parse through? I don't know, but I gave up on this method.
I tried to use the docx module but I've been told it is not compatible with Python 3.3. So I am left with using pywin32 and the COM. I have used this successfully with Excel to get the data I need, but I am having trouble with Word because there is FAR less documentation, and reading through the object model on Microsoft's website is over my head.
Here is what I have so far to open the document(s):
import win32com.client as win32
import glob, os
word = win32.gencache.EnsureDispatch('Word.Application')
word.Visible = True
for infile in glob.glob(os.path.join(r'mypath', '*.docx')):
print(infile)
doc = word.Documents.Open(infile)
So at this point I can do something like
print(doc.Content.Text)
And see the contents of the files, but it still looks like there is some odd formatting in there and I have no idea how to actually parse through to grab the data I need. I can create RE's that will successfully find the strings that I'm looking for, I just don't know how to implement them into the program using the COM.
The code I have so far was mostly found through Google. I don't even think this is that hard, it's just that reading through the object model on Microsoft's website is like reading a foreign language. Any help is MUCH appreciated. Thank you.
Edit: code I was using to save the files from docx to txt:
import os
import fnmatch
import win32com.client

wordapp = win32com.client.gencache.EnsureDispatch('Word.Application')

for path, dirs, files in os.walk(r'mypath'):
    for doc in [os.path.abspath(os.path.join(path, filename)) for filename in files if fnmatch.fnmatch(filename, '*.docx')]:
        print("processing %s" % doc)
        wordapp.Documents.Open(doc)
        docastxt = os.path.splitext(doc)[0] + '.txt'  # swap the extension safely
        wordapp.ActiveDocument.SaveAs(docastxt, FileFormat=win32com.client.constants.wdFormatText)
        wordapp.ActiveDocument.Close()
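(For reference, one minimal way to run the regexes directly over the COM text, without saving anything to disk, might look like this; the pattern is just a placeholder for whatever you are searching for.)

import glob
import os
import re
import win32com.client as win32

pattern = re.compile(r'some pattern')  # placeholder regex

word = win32.gencache.EnsureDispatch('Word.Application')
for infile in glob.glob(os.path.join(r'mypath', '*.docx')):
    doc = word.Documents.Open(infile)
    text = doc.Content.Text      # the whole document as one string
    doc.Close()
    for match in pattern.finditer(text):
        print(infile, match.group())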
If you don't want to learn the complicated way Word models documents, and then how that's exposed through the Office object model, a much simpler solution is to have Word save a plain-text copy of the file.
There are a lot of options here. Use tempfile to create temporary text files and then delete them, or store permanent ones alongside the doc files for later re-use? Use Unicode (which, in Microsoft speak, means UTF-16-LE with a BOM) or encoded text? And so on. So, I'll just pick something reasonable, and you can look at the Document.SaveAs, WdSaveFormat, etc. docs to modify it.
wdFormatUnicodeText = 7

for infile in glob.glob(os.path.join(r'mypath', '*.docx')):
    print(infile)
    doc = word.Documents.Open(infile)
    txtpath = os.path.splitext(infile)[0] + '.txt'  # note: the variable infile, not the string 'infile'
    doc.SaveAs(txtpath, wdFormatUnicodeText)
    doc.Close()
    with open(txtpath, encoding='utf-16') as f:
        process_the_file(f)
As noted in your comments, what this does to complex things like tables, multi-column text, etc. may not be exactly what you want. In that case, you might want to consider saving as, e.g., wdFormatFilteredHTML, which Python has nice parsers for. (It's a lot easier to BeautifulSoup a table than to win32com-Word it.)
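As a rough sketch of that alternative (the wdFormatFilteredHTML value of 10 is assumed from the WdSaveFormat enumeration, and BeautifulSoup is an extra dependency):

import glob
import os

from bs4 import BeautifulSoup
import win32com.client as win32

wdFormatFilteredHTML = 10  # assumed WdSaveFormat value for filtered HTML

word = win32.gencache.EnsureDispatch('Word.Application')
for infile in glob.glob(os.path.join(r'mypath', '*.docx')):
    htmlpath = os.path.splitext(infile)[0] + '.html'
    doc = word.Documents.Open(infile)
    doc.SaveAs(htmlpath, wdFormatFilteredHTML)
    doc.Close()
    with open(htmlpath, 'rb') as f:          # let BeautifulSoup work out the encoding
        soup = BeautifulSoup(f, 'html.parser')
    print(soup.get_text()[:200])             # or walk soup.find_all('table'), etc.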
oodocx is my fork of the python-docx module that is fully compatible with Python 3.3. You can use the replace method to do regular expression searches. Your code would look something like:
from oodocx import oodocx
d = oodocx.Docx('myfile.docx')
d.replace('searchstring', 'replacestring')
d.save('mynewfile.docx')
If you just want to remove strings, you can pass an empty string as the replacement argument.