Insert stamp into PDF at different positions with UiPath and Python?

I have some PDF files that I want to stamp, but the location is not the same in each file. Is there a way to find the location in the file and stamp the PDF there? I use UiPath and Python.
I still haven't found a solution.

Disclaimer: I am the author of borb, the library used in this answer.
From what I understand of your question, you want to find a certain word on the page, and add a stamp on top of that.
Let's split that in two parts:
Finding the position of a word on the page
#!chapter_005/src/snippet_006.py
import typing

from borb.pdf import Document
from borb.pdf import PDF
from borb.toolkit import RegularExpressionTextExtraction


def main():
    # read the Document
    # fmt: off
    doc: typing.Optional[Document] = None
    l: RegularExpressionTextExtraction = RegularExpressionTextExtraction("[lL]orem .* [dD]olor")
    with open("output.pdf", "rb") as in_file_handle:
        doc = PDF.loads(in_file_handle, [l])
    # fmt: on

    # check whether we have read a Document
    assert doc is not None

    # print matching groups
    for i, m in enumerate(l.get_matches()[0]):
        print("%d %s" % (i, m.group(0)))
        for r in m.get_bounding_boxes():
            print(
                "\t%f %f %f %f" % (r.get_x(), r.get_y(), r.get_width(), r.get_height())
            )


if __name__ == "__main__":
    main()
In this snippet we use RegularExpressionTextExtraction to process the Page events (rendering text, images, etc). This class acts as an EventListener, and keeps track of which text (being rendered) matches the given regex.
We can then print that text, and its position.
Putting a stamp on a page, at a given position
In the next snippet, we are going to:
create a PDF containing some text
add a rubber stamp (annotation) on that page, at precise coordinates
You can of course modify this snippet to only add the stamp, and to work from an existing PDF (rather than create one).
#!chapter_006/src/snippet_005.py
from decimal import Decimal

from borb.pdf.canvas.layout.annotation.rubber_stamp_annotation import (
    RubberStampAnnotation,
    RubberStampAnnotationIconType,
)
from borb.pdf.canvas.geometry.rectangle import Rectangle
from borb.pdf import SingleColumnLayout
from borb.pdf import PageLayout
from borb.pdf import Paragraph
from borb.pdf import Document
from borb.pdf import Page
from borb.pdf.page.page_size import PageSize
from borb.pdf import PDF


def main():
    doc: Document = Document()
    page: Page = Page()
    doc.add_page(page)

    layout: PageLayout = SingleColumnLayout(page)
    layout.add(
        Paragraph(
            """
            Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
            Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
            Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
            Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
            """
        )
    )

    # This is where the stamp is added
    page_width: Decimal = PageSize.A4_PORTRAIT.value[0]
    page_height: Decimal = PageSize.A4_PORTRAIT.value[1]
    s: Decimal = Decimal(100)
    page.add_annotation(
        RubberStampAnnotation(
            Rectangle(
                page_width / Decimal(2) - s / Decimal(2),
                page_height / Decimal(2) - s / Decimal(2),
                s,
                s,
            ),
            name=RubberStampAnnotationIconType.CONFIDENTIAL,
        )
    )

    # store
    with open("output.pdf", "wb") as out_file_handle:
        PDF.dumps(out_file_handle, doc)


if __name__ == "__main__":
    main()
The result should be something like this:
In order to change the appearance of the stamp, I encourage you to check out the documentation.
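To combine the two parts for the original use case, the bounding box returned by RegularExpressionTextExtraction can be turned into the stamp's Rectangle. Below is a small sketch of just that coordinate step; `stamp_rectangle` is a hypothetical helper, not part of borb, and PDF coordinates have their origin at the bottom-left of the page:

```python
from decimal import Decimal

def stamp_rectangle(x, y, w, h, stamp_size=Decimal(100)):
    # Center a square stamp of side `stamp_size` on the bounding box (x, y, w, h).
    # PDF coordinates put the origin at the bottom-left of the page.
    cx = x + w / Decimal(2)
    cy = y + h / Decimal(2)
    half = stamp_size / Decimal(2)
    return (cx - half, cy - half, stamp_size, stamp_size)

# With borb, the result would feed the annotation from the second snippet, e.g.:
#   r = m.get_bounding_boxes()[0]
#   sx, sy, sw, sh = stamp_rectangle(r.get_x(), r.get_y(), r.get_width(), r.get_height())
#   page.add_annotation(RubberStampAnnotation(Rectangle(sx, sy, sw, sh),
#                                             name=RubberStampAnnotationIconType.CONFIDENTIAL))
```

A match spanning multiple lines yields several bounding boxes, so you may want one stamp per box or the union of the boxes.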

Related

Saving a redacted PDF file in Python to mask underneath text

I read in a PDF file in Python, added a text box on top of the text that I'd like to redact, and saved the change in a new PDF file. When I searched for the text in the redacted PDF file using a PDF reader, the text can still be found.
Is there a way to save the PDF as a single layer file? Or is there a way to ensure that the text under the text box can be removed?
import PyPDF2
import re
import fitz
import io
import os
import pandas
import numpy as np
from PyPDF2 import PdfFileReader, PdfFileWriter
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A4
from reportlab.graphics import renderPDF
from reportlab.lib import colors
from reportlab.graphics.shapes import *
reader = PyPDF2.PdfReader(files)
packet = io.BytesIO()
can = canvas.Canvas(packet, pagesize = A4)
can.rect(65, 750, 40, 30, stroke=1, fill=1)
can.setFillColorRGB(1, 1, 1)
can.save()
packet.seek(0)
new_pdf = PdfFileReader(packet)
output = PyPDF2.PdfFileWriter()
pageToOutput = reader.getPage(1)
pageToOutput.mergePage(new_pdf.getPage(0))
output.addPage(pageToOutput)
outputStream = open('NewFile.pdf', "wb")
output.write(outputStream)
outputStream.close()
I used one of the solutions (pdf2image and PIL) in the link provided by @Matt Pitken, and it worked well.
Disclaimer: I am the author of borb, the library used in this answer
Redaction in PDF is done through annotations.
You can think of annotations as "something I added later to the PDF". For instance a post-it note with a remark.
Redaction annotations are basically a post-it with the implied meaning "this content needs to be removed from the PDF"
In borb, you can add redaction annotations and then apply them.
This is purposefully a two-step process. The idea is that you can send the document (with annotations) to someone else and ask them to review it (e.g. "Did I remove all the content that needed to be removed?").
Once your document is ready, you can apply the redaction annotations which will effectively remove the content.
Step 1 (creating a PDF with content, and redaction annotations):
from decimal import Decimal

from borb.pdf.canvas.layout.annotation.redact_annotation import RedactAnnotation
from borb.pdf.canvas.geometry.rectangle import Rectangle
from borb.pdf import SingleColumnLayout
from borb.pdf import PageLayout
from borb.pdf import Paragraph
from borb.pdf import Document
from borb.pdf import Page
from borb.pdf import PDF


def main():
    doc: Document = Document()
    page: Page = Page()
    doc.add_page(page)

    layout: PageLayout = SingleColumnLayout(page)
    layout.add(
        Paragraph(
            """
            Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
            Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
            Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
            Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
            """
        )
    )

    page.add_annotation(
        RedactAnnotation(
            Rectangle(Decimal(405), Decimal(721), Decimal(40), Decimal(8)).grow(
                Decimal(2)
            )
        )
    )

    # store
    with open("output.pdf", "wb") as out_file_handle:
        PDF.dumps(out_file_handle, doc)


if __name__ == "__main__":
    main()
Of course, you can simply open an existing PDF and add a redaction annotation.
Step 2 (applying the redaction annotation):
import typing

from borb.pdf import Document
from borb.pdf import PDF


def main():
    doc: typing.Optional[Document] = None
    with open("output.pdf", "rb") as pdf_file_handle:
        doc = PDF.loads(pdf_file_handle)

    # apply redaction annotations
    doc.get_page(0).apply_redact_annotations()

    # store
    with open("output.pdf", "wb") as out_file_handle:
        PDF.dumps(out_file_handle, doc)


if __name__ == "__main__":
    main()

How to extract specific portion of text from file in Python?

I would like to extract a specific portion from a text.
For example, I have this text:
"*Lorem ipsum dolor sit amet, consectetur adipisci elit, sed do eiusmod tempor incidunt ut labore et dolore magna aliqua.
Ut enim ad minim veniam, quis nostrum exercitationem ullamco laboriosam, nisi ut aliquid ex ea commodi consequatur.
Duis aute irure reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
Excepteur sint obcaecat cupiditat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum*",
I would like to extract the content from "Duis aute" up to the start of a new line (i.e. ending with "nulla pariatur").
How could I do this in Python? Thanks in advance to everyone.
Sorry for poor English.
You can use this.
with open('filename.txt') as f:  # open the file and read its contents
    data = f.read()

s_index = data.index('Duis aute')   # starting index of the text
e_index = data.index('.', s_index)  # index of the first dot after the starting index
text = data[s_index:e_index]
print(text)
Output
Duis aute irure reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur
If you want the extracted text to end at a newline (\n) instead, use this:
with open('filename.txt') as f:
    data = f.read()

# try/except because index() raises a ValueError if the substring is not found
try:
    s_index = data.index('Duis aute')
    e_index = data.index('\n', s_index)
except ValueError:
    print('Value Not Found.')
else:
    text = data[s_index:e_index]
    print(text)
Testing
with open('filename.txt') as f:
    data = f.read()

# try/except because index() raises a ValueError if the substring is not found
try:
    s_index = data.index('ipsum dolor')
    e_index = data.index('\n', s_index)
except ValueError:
    print('Value Not Found.')
else:
    text = data[s_index:e_index]
    print(text)
output
ipsum dolor sit amet, consectetur adipisci elit, sed do eiusmod tempor incidunt ut labore et dolore magna aliqua.
with open('filename.txt') as f:
    data = f.read()

# try/except because index() raises a ValueError if the substring is not found
try:
    s_index = data.index('Ut enim ad minim')
    e_index = data.index('\n', s_index)
except ValueError:
    print('Value Not Found.')
else:
    text = data[s_index:e_index]
    print(text)
output
Ut enim ad minim veniam, quis nostrum exercitationem ullamco laboriosam, nisi ut aliquid ex ea commodi consequatur.
And if you need only one word after the given word, use this:
with open('filename.txt') as f:
    data = f.read()

# try/except because index() raises a ValueError if the substring is not found
try:
    s_index = data.index('Lorem')
    e_index = data.index(' ', s_index + len('Lorem') + 1)
except ValueError:
    print('Value Not Found.')
else:
    text = data[s_index:e_index]
    print(text)
output
Lorem ipsum
If you are trying to extract a particular "sentence" - then one way could be to split on the sentence separator (\n for example)
sentences = s.split('\n')
If you have multiple delimiters for a sentence - you can use the re module -
import re
sentences = re.split(r'\.|\n', s)
You can then extract the matches from sentences -
required = '\n'.join(_ for _ in sentences if _.strip().startswith('Duis aute'))
Of course, you can combine all of this into a one-liner -
'\n'.join(_ for _ in s.split('.') if _.strip().startswith('Duis aute'))
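A sketch of the same idea with a single regular expression: since `.` does not match `\n` by default, a pattern anchored on the known start phrase matches up to the end of that line (this assumes the phrase appears on one line):

```python
import re

text = ("Lorem ipsum dolor sit amet.\n"
        "Duis aute irure reprehenderit in voluptate velit esse cillum "
        "dolore eu fugiat nulla pariatur.\n"
        "Excepteur sint obcaecat cupiditat non proident.")

# Match from "Duis aute" onward; '.' stops at the newline,
# so the match covers exactly the rest of that line.
m = re.search(r"Duis aute.*", text)
if m:
    print(m.group(0))
```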

Python snippet to manage regex replacement index map?

For a text-processing task I need to apply multiple regex substitutions (i.e. re.sub). There are multiple regex patterns with custom replacement parameters. The result needs to be the original text, the text with replacements, and a map of tuples identifying the start/end indices of replaced strings in the source text and the corresponding indices in the result text.
e.g.
The following sample code has an input text and an array of 3 modifier tuples:
text = '''
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. On Apr. 6th, 2009 Ut enim culpa minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex 5 ea commodo consequat. Duis aute irure dolor in reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. On June 23rd, 3004 excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt 6 mollit anim id est laborum.
'''

modifiers = [
    (
        r'([\w]+\.?)\s+(\d{1,2})\w{2},\s+(\d{4})',
        {1: lambda x: month(x), 2: lambda x: num2text(x), 3: lambda x: num2text(x)}
    ),
    (
        r' (\d) ',
        {1: lambda x: num2text(x)}
    ),
    (
        r'(culpa)',
        {1: 'culpae'}
    )
]
sample output index map:
[((7, 11), (7, 30)), ((12, 14), (31, 35)), ((20, 22), (41, 51)), ((23, 28), (52, 57)),...]
I already wrote a complicated function that tries to handle all the corner cases of index offsetting during the replacements, but it's already taking too much time.
Maybe there is already a solution for this task?
Here is a demo of current state.
Word transformation expansion (normalization) functions were intentionally made simplistic with fixed value dict mapping.
The ultimate goal is to make a text dataset generator. Dataset needs to have two text parts - one with numbers abbreviations and other expandable strings and the other with fully expanded into full textual representation (e.g. 3->three, apr. -> april, etc.) And also offset mapping to link parts of non-expanded text with corresponding parts in expanded text.
One corner case my implementation already handles: with at least two modifiers A and B and text like 'text text a text b text a text b', the output span computed for the second 'a' replacement becomes incorrect once modifier B alters the output text before that second 'a'.
I also partially handle the case where a subsequent modifier replaces output produced by a previous modifier's replacement, and the initial source span location has to be recovered.
UPDATE
Writing a python package called re-map.
One might also consider spacy mentioned here.
Here is a code example that handles your text modifiers using re, datetime and a third party package called inflect.
The code will return the modified text with the position of the modified words.
PS: You need to explain more what you're trying to do. Otherwise you can use this code and modify it to fulfill your needs.
To install inflect: pip install inflect
Sample code:
import re
from datetime import datetime

import inflect

ENGINE = inflect.engine()


def num2words(num):
    """Number to words using the inflect package"""
    return ENGINE.number_to_words(num)


def pretty_format_date(pattern, date_found, text):
    """Pretty-format dates"""
    _month, _day, _year = date_found.groups()
    month = datetime.strptime('{day}/{month}/{year}'.format(
        day=_day, month=_month.strip('.'), year=_year
    ), '%d/%b/%Y').strftime('%B')
    day, year = num2words(_day), num2words(_year)
    date = '{month} {day}, {year} '.format(month=month, day=day, year=year)
    begin, end = date_found.span()
    _text = re.sub(pattern, date, text[begin:end])
    text = text[:begin] + _text + text[end:]
    return text, begin, end


def format_date(pattern, text):
    """Format given string into date"""
    spans = []
    # The for loop prevents us from going into an infinite loop
    # if there is malformed text or a bad regex
    for _ in re.findall(pattern, text):
        date_found = re.search(pattern, text)
        if not date_found:
            break
        try:
            text, begin, end = pretty_format_date(pattern, date_found, text)
            spans.append([begin, end])
        except Exception:
            # Pass without any modification if there are errors with date formats
            pass
    return text, spans


def number_to_words(pattern, text):
    """Number to words, with spans"""
    spans = []
    # The for loop prevents us from going into an infinite loop
    # if there is malformed text or a bad regex
    for _ in re.findall(pattern, text):
        number_found = re.search(pattern, text)
        if not number_found:
            break
        _number = number_found.group(1)
        number = num2words(_number)
        begin, end = number_found.span()
        spans.append([begin, end])
        _text = re.sub(pattern, number, text[begin:end])
        text = text[:begin] + ' {} '.format(_text) + text[end:]
    return text, spans


def custom_func(pattern, text, output):
    """Custom replacement function"""
    spans = []
    for _ in re.findall(pattern, text):
        _found = re.search(pattern, text)
        begin, end = _found.span()
        spans.append([begin, end])
        _text = re.sub(pattern, output, text[begin:end])
        text = text[:begin] + ' {} '.format(_text) + text[end:]
    return text, spans
text = '''
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. On Apr. 6th, 2009 Ut enim culpa minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex 5 ea commodo consequat. Duis aute irure dolor in reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. On June 23rd, 3004 excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt 6 mollit anim id est laborum.
'''

modifiers = [
    (
        r'([\w]+\.?)\s+(\d{1,2})\w{2},\s+(\d{4})',
        format_date
    ),
    (
        r' (\d) ',
        number_to_words
    ),
    (
        r'( \bculpa\b)',  # better to use this pattern, to catch the exact word
        'culpae'
    )
]

for regex, func in modifiers:
    if not isinstance(func, str):
        print('\n{} {} {}'.format('#' * 20, func.__name__, '#' * 20))
        _text, spans = func(regex, text)
    else:
        print('\n{} {} {}'.format('#' * 20, func, '#' * 20))
        _text, spans = custom_func(regex, text, func)
    print(_text, spans)
Output:
#################### format_date ####################
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. On April six, two thousand and nine Ut enim culpa minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex 5 ea commodo consequat. Duis aute irure dolorin reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. On June 23rd, 3004 excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt 6 mollit animid est laborum.
[[128, 142]]
#################### number_to_words ####################
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. On Apr. 6th, 2009 Ut enim culpa minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex five ea commodo consequat. Duis aute irure dolor in reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. On June 23rd, 3004 excepteur sint occaecat
cupidatat non proident, sunt in culpa qui officia deserunt six mollit anim id est laborum.
[[231, 234], [463, 466]]
#################### culpae ####################
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt
ut labore et dolore magna aliqua. On Apr. 6th, 2009 Ut enim culpae minim veniam, quis nostrud exercitation ullamco
laboris nisi ut aliquip ex 5 ea commodo consequat. Duis aute irure dolorin reprehenderit in
voluptate velit esse cillum dolore eu fugiat nulla pariatur. On June 23rd, 3004 excepteur sint occaecat
cupidatat non proident, sunt in culpae qui officia deserunt 6 mollit anim id est laborum.
[[150, 156], [435, 441]]
Demo on Replit
Wrote a re-map python library to solve the problem described.
Here is a demo.
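For reference, the core bookkeeping behind such an index map can be sketched without any third-party package: run the substitutions via re.finditer with a replacer callback and a running offset, recording each source span alongside its span in the output. This is a simplified sketch for a single pattern; it does not handle the overlapping-modifier corner cases described above (that is what re-map is for):

```python
import re

def sub_with_map(pattern, repl_func, text):
    """Apply a regex substitution while recording
    ((src_start, src_end), (dst_start, dst_end)) for each replacement."""
    out = []
    index_map = []
    last = 0      # end of the previous match in the source text
    offset = 0    # cumulative length difference between output and source
    for m in re.finditer(pattern, text):
        out.append(text[last:m.start()])
        replacement = repl_func(m)
        dst_start = m.start() + offset
        out.append(replacement)
        index_map.append(((m.start(), m.end()),
                          (dst_start, dst_start + len(replacement))))
        offset += len(replacement) - (m.end() - m.start())
        last = m.end()
    out.append(text[last:])
    return "".join(out), index_map

new_text, index_map = sub_with_map(
    r"\d",
    lambda m: "five" if m.group(0) == "5" else "six",
    "aliquip ex 5 ea ... deserunt 6 mollit",
)
print(new_text)   # aliquip ex five ea ... deserunt six mollit
print(index_map)  # [((11, 12), (11, 15)), ((29, 30), (32, 35))]
```

Applying several modifiers in sequence means feeding each pass's output (and composing its index map with the previous one), which is exactly where the corner cases in the question appear.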

Unable to identify text segments based on keywords

I have a potentially large amount of text output coming from an application. The output can be broken up into different sections, and I would like to determine which section I am processing based on the existence of one or more keywords or key phrases.
Dummy Example output:
******************
** MyApp 1.1 **
** **
******************
**Copyright **
******************
Note # 1234
Text of the note 1234
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
******************
INPUT INFO:
Number of data points: 123456
Number of cases: 983
Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
******************
Analysis Type: Simple
Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
******************
Results:
Data 1: 1234e-10
Data 2
------
1 2
2 3.4
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
*******************
CPU TIME: 1:01:12
WALL TIME: 1:04:23
*******************
So I created a dictionary like the one below, and am trying to look up the values of the dict in each chunk.
def process(this_chunk):
    chunkdict = {}
    keys = ['banner_k', 'runSummary', 'inputSummary_k']
    vals = [['MyApp', 'Copyright'], ['CPU TIME'], ['Number of data']]
    for k, v in zip(keys, vals):
        chunkdict[k] = v
    for k, v in chunkdict.items():
        if any(x in v for x in this_chunk.splitlines()):
            print(k + " is in this chunk")
            process_for_k(chunk)  # function for each specific section
            break
        else:
            print(k + " is not in this chunk")
    return
But this does not identify all the chunks. The values are indeed present, yet they are matched in only one chunk. To be specific, my real application has the exact words 'CPU TIME' and 'Copyright' in its output.
The section with 'CPU TIME' is captured correctly but the section with 'Copyright' is not found.
Is this the right approach to identifying sections with known keywords?
Any ideas why this (if any(x in v for x in this_chunk.splitlines()):) might not work?
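A likely culprit, reading the condition closely (an assumption, since the full code isn't shown): `x in v` with `v` a list tests whether a whole line of the chunk *equals* one of the keywords, so a line like `**Copyright **` never matches `'Copyright'`. Inverting the test to check each keyword as a substring of the chunk behaves as intended; `section_of` is an illustrative helper, not from the question:

```python
def section_of(chunk, chunkdict):
    # Return the first section key one of whose keywords occurs in the chunk.
    for key, keywords in chunkdict.items():
        # substring test per keyword, instead of line-equality against the list
        if any(kw in chunk for kw in keywords):
            return key
    return None

chunkdict = {
    'banner_k': ['MyApp', 'Copyright'],
    'runSummary': ['CPU TIME'],
    'inputSummary_k': ['Number of data'],
}
print(section_of("**Copyright **\nNote # 1234", chunkdict))  # banner_k
print(section_of("CPU TIME: 1:01:12", chunkdict))            # runSummary
```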

Any way to search zlib-compressed text?

For a project I have to store a great deal of text and I was hoping to keep the database size small by zlib-compressing the text. Is there a way to search zlib-compressed text by testing for substrings without decompressing?
I would like to do something like the following:
>>> import zlib
>>> lorem = zlib.compress(b"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.")
>>> test_string = zlib.compress(b"Lorem")
>>> test_string in lorem
False
No. You cannot compress a short string and expect to find the result of that compression in the compressed version of a file that contains that original short string. Compression codes the data differently depending on the data that precedes it. In fact, that's how most compressors work -- by using the preceding data for matching strings and statistical distributions.
To search for a string, you have to decompress the data. You do not have to store the decompressed data though. You can read in the compressed data and decompress on the fly, discarding that data as you go until you find your string or get to the end. If the compressed data is very large and on slow mass media, this may be faster than searching for the string in the same data uncompressed on the same media.
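A sketch of that streaming approach with only the standard library: decompress in fixed-size chunks via zlib.decompressobj and scan each chunk, keeping a small overlap buffer so a match spanning a chunk boundary isn't missed:

```python
import zlib

def compressed_contains(needle: bytes, compressed: bytes, chunk_size: int = 4096) -> bool:
    # Search for `needle` while decompressing on the fly,
    # keeping only len(needle)-1 bytes of overlap between chunks.
    d = zlib.decompressobj()
    tail = b""
    for i in range(0, len(compressed), chunk_size):
        data = tail + d.decompress(compressed[i:i + chunk_size])
        if needle in data:
            return True
        # keep enough bytes to catch a match straddling the boundary
        tail = data[-(len(needle) - 1):] if len(needle) > 1 else b""
    return needle in tail + d.flush()

lorem = zlib.compress(b"Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
print(compressed_contains(b"Lorem", lorem))  # True
print(compressed_contains(b"zlib", lorem))   # False
```

This finds the uncompressed substring without ever holding the whole decompressed text in memory, which matches the answer's point: you must decompress to search, but you need not store the result.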
