I'm developing a service in which I now need to extract images from a PDF file. From a Linux command line I can extract images using the Poppler library like this:
pdfimages my_file.pdf /tmp/image
Since I'm using the Python Flask framework and want to run my service on Heroku, I need to extract the images using pure Python (or any library that can run on Heroku alongside Flask).
So does anybody know how I can extract images from a PDF in pure Python? I prefer open-source solutions, but I'm willing to pay if needed (as long as it works under my own control on Heroku).
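One option is the minecart library, a pure-Python PDF parser that exposes the images on each page. Below is a sketch of an extraction function built on it; it assumes pillow is installed, since minecart's as_pil() needs it to build the image objects, and it saves every image it finds as a JPEG in an images/ subdirectory named after the PDF.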
import os

import minecart  # pip install minecart; as_pil() requires pillow


def extractImages(filename):
    # Make output directories named after the PDF if they don't exist.
    new_dir_name = filename[:-4]
    if not os.path.exists(new_dir_name):
        os.makedirs(new_dir_name + '/images')
        os.makedirs(new_dir_name + '/text')

    # Open the target file and parse it with minecart.Document.
    with open(filename, 'rb') as pdf_file:
        doc = minecart.Document(pdf_file)
        # Iterate over all pages and save every embedded image.
        count = 0
        for page in doc.iter_pages():
            for image in page.images:
                try:
                    im = image.as_pil()  # requires pillow
                    name = new_dir_name + '/images/image_' + str(count) + '.jpg'
                    count = count + 1
                    im.save(name)
                except Exception:
                    print('Error encountered at %s' % filename)

    # Record the number of extracted images.
    doc_name = new_dir_name + '/images/info.txt'
    with open(doc_name, 'a') as x:
        x.write('Number of images in document: {}\n'.format(count))
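A hypothetical call, assuming my_file.pdf sits in the working directory:

extractImages('my_file.pdf')

The images then end up in my_file/images/, and my_file/images/info.txt records how many were extracted.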
I am working on a project where I need to scrape images off of the web. To do this, I write the image links to a file and then download each of them to a folder with requests. At first I used Google as the scrape site, but due to several reasons I have decided that Wikipedia is a much better alternative.

However, after my first attempt many of the images couldn't be opened, so I tried again, this time saving each downloaded image under a name whose ending matched the ending of its link. More images could be accessed this way, but many still couldn't be opened. When I tested downloading the images myself (individually, outside of the function), they downloaded perfectly, and when I then used my function to download them, they kept downloading correctly (i.e. I could access them).

I am not sure if it is important, but the image endings that I generally come across are svg.png and png. I want to know why this is occurring and what I may be able to do to prevent it. I have left some of my code below. Thank you.
Function:
import os
import requests

def download_images(file):
    object = file[0:file.index("IMAGELINKS") - 1]
    folder_name = object + "_images"
    dir = os.path.join("math_obj_images/original_images/", folder_name)
    if not os.path.exists(dir):
        os.mkdir(dir)
    with open("math_obj_image_links/" + file, "r") as f:
        count = 1
        for line in f:
            try:
                # Strip the trailing newline, if present.
                if line[-1] == "\n":
                    line = line[:-1]
                if line[0] != "/":
                    # Everything after the first dot of the last path component
                    # is treated as the file ending (e.g. ".svg.png").
                    last_chunk = line.split("/")[-1]
                    endings = last_chunk.split(".")[1:]
                    image_ending = ""
                    for ending in endings:
                        image_ending += "." + ending
                    if image_ending == "":
                        continue
                    # Use separate handles so the outer file object f is not shadowed.
                    image_path = ("math_obj_images/original_images/" + folder_name
                                  + "/" + object + str(count) + image_ending)
                    with open(image_path, "wb") as img_file:
                        img_file.write(requests.get(line).content)
                    endings_path = "math_obj_image_endings/" + object + "_IMAGEENDINGS.txt"
                    with open(endings_path, "a") as endings_file:
                        endings_file.write(image_ending + "\n")
                    count += 1
            except Exception:
                continue
Doing this outside of the function worked:
with open("test" + image_ending, "wb") as f:
    f.write(requests.get(line).content)
Example of image link file:
https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Triangle.TrigArea.svg/120px-Triangle.TrigArea.svg.png
https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/Square_%28geometry%29.svg/120px-Square_%28geometry%29.svg.png
https://upload.wikimedia.org/wikipedia/commons/thumb/3/33/Hexahedron.png/120px-Hexahedron.png
https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Hypercube.svg/110px-Hypercube.svg.png
https://wikimedia.org/api/rest_v1/media/math/render/svg/5f8ab564115bf2f7f7d12a9f873d9c6c7a50190e
https://en.wikipedia.org/wiki/Special:CentralAutoLogin/start?type=1x1
https:/static/images/footer/wikimedia-button.png
https:/static/images/footer/poweredby_mediawiki_88x31.png
If all the files are indeed in PNG format and the suffix is always .png, you could try something like this:
import requests
from pathlib import Path
u1 = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Triangle.TrigArea.svg/120px-Triangle.TrigArea.svg.png"
r = requests.get(u1)
Path('u1.png').write_bytes(r.content)
My previous answer works for PNGs only.
For SVG files you need to check whether the file contents start with the string "<svg" and create a file with the .svg suffix.
The code below saves the downloaded files in the "downloads" subdirectory.
import requests
from pathlib import Path

# Make sure the target directory exists.
Path('downloads').mkdir(exist_ok=True)

# urls are stored in a file 'urls.txt'.
with open('urls.txt') as f:
    for i, url in enumerate(f.readlines()):
        url = url.strip()  # MUST strip the line-ending char(s)!
        try:
            content = requests.get(url).content
        except Exception:
            print('Cannot download url:', url)
            continue
        # Check if this is an SVG file
        # Note that content is bytes hence the b in b'<svg'
        if content.startswith(b'<svg'):
            ext = 'svg'
        elif url.endswith('.png'):
            ext = 'png'
        else:
            print('Cannot process contents of url:', url)
            continue  # skip files we cannot classify
        # Reuse the content we already downloaded instead of fetching it again.
        Path('downloads', f'url{i}.{ext}').write_bytes(content)
Contents of the urls.txt file:
(the last url is an svg)
https://upload.wikimedia.org/wikipedia/commons/thumb/6/63/Triangle.TrigArea.svg/120px-Triangle.TrigArea.svg.png
https://upload.wikimedia.org/wikipedia/commons/thumb/c/c9/Square_%28geometry%29.svg/120px-Square_%28geometry%29.svg.png
https://upload.wikimedia.org/wikipedia/commons/thumb/3/33/Hexahedron.png/120px-Hexahedron.png
https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Hypercube.svg/110px-Hypercube.svg.png
https://wikimedia.org/api/rest_v1/media/math/render/svg/5f8ab564115bf2f7f7d12a9f873d9c6c7a50190e
I'm trying to write a Python program converting ".pdf" files to ".docx" ones, using the Adobe PDF Services API (free trial).
I've found documentation showing how to transform any ".pdf" file into a ".zip" file containing ".txt" files (holding the text data) and Excel files (holding the tabular data).
import logging
import os.path

from adobe.pdfservices.operation.auth.credentials import Credentials
from adobe.pdfservices.operation.exception.exceptions import ServiceApiException, ServiceUsageException, SdkException
from adobe.pdfservices.operation.pdfops.options.extractpdf.extract_pdf_options import ExtractPDFOptions
from adobe.pdfservices.operation.pdfops.options.extractpdf.extract_element_type import ExtractElementType
from adobe.pdfservices.operation.execution_context import ExecutionContext
from adobe.pdfservices.operation.io.file_ref import FileRef
from adobe.pdfservices.operation.pdfops.extract_pdf_operation import ExtractPDFOperation

logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))

try:
    # Get base path.
    base_path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath("C:/..link.../extractpdf/extract_txt_from_pdf.ipynb"))))

    # Initial setup, create credentials instance.
    credentials = Credentials.service_account_credentials_builder()\
        .from_file(base_path + "\\pdfservices-api-credentials.json") \
        .build()

    # Create an ExecutionContext using credentials and create a new operation instance.
    execution_context = ExecutionContext.create(credentials)
    extract_pdf_operation = ExtractPDFOperation.create_new()

    # Set operation input from a source file.
    source = FileRef.create_from_local_file(base_path + "/resources/trs_pdf_file.pdf")
    extract_pdf_operation.set_input(source)

    # Build ExtractPDF options and set them into the operation.
    extract_pdf_options: ExtractPDFOptions = ExtractPDFOptions.builder() \
        .with_element_to_extract(ExtractElementType.TEXT) \
        .with_element_to_extract(ExtractElementType.TABLES) \
        .build()
    extract_pdf_operation.set_options(extract_pdf_options)

    # Execute the operation.
    result: FileRef = extract_pdf_operation.execute(execution_context)

    # Save the result to the specified location.
    result.save_as(base_path + "/output/Extract_TextTableau_From_trs_pdf_file.zip")
except (ServiceApiException, ServiceUsageException, SdkException):
    logging.exception("Exception encountered while executing operation")
But I can't yet get the conversion to a ".docx" file done, even after changing the name of the extracted file to name.docx.
I went through the documentation of adobe.pdfservices.operation.pdfops.options.extractpdf.extract_pdf_options.ExtractPDFOptions() but didn't find a way to tune the extraction and change its output from ".zip" to ".docx". What things can I try next?
Unfortunately, right now the Python SDK only supports the Extract portion of our PDF services. You could use the services via the REST APIs (https://documentcloud.adobe.com/document-services/index.html#how-to-get-started-) as an alternative.
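As a rough illustration only (the endpoint paths, header names, and JSON fields below are placeholders, not the documented API; check the REST reference linked above for the real contract), a PDF-to-DOCX call with requests would have this general shape: upload the source file, start an export job, poll until it finishes, then download the result.

import time
import requests

ACCESS_TOKEN = "..."  # obtained via the credential flow described in the docs
CLIENT_ID = "..."     # your API key

headers = {"Authorization": "Bearer " + ACCESS_TOKEN, "x-api-key": CLIENT_ID}

# 1. Upload the source PDF (hypothetical endpoint).
with open("trs_pdf_file.pdf", "rb") as fh:
    upload = requests.post("https://pdf-services.adobe.io/assets",
                           headers=headers, files={"file": fh})
asset_id = upload.json()["assetID"]

# 2. Start the PDF -> DOCX export job (hypothetical endpoint and fields).
job = requests.post("https://pdf-services.adobe.io/operation/exportpdf",
                    headers=headers,
                    json={"assetID": asset_id, "targetFormat": "docx"})
poll_url = job.headers["location"]

# 3. Poll until the job is done, then download the result.
while True:
    status = requests.get(poll_url, headers=headers).json()
    if status["status"] == "done":
        docx = requests.get(status["asset"]["downloadUri"]).content
        with open("trs_pdf_file.docx", "wb") as out:
            out.write(docx)
        break
    if status["status"] == "failed":
        raise RuntimeError("Export job failed")
    time.sleep(2)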
My code is working correctly to scour a directory of PDFs, download the weblinks embedded within those PDFs, and sequentially name the downloads with the appropriate file extensions.
That being said - I am getting a few random files that download but DON'T have an extension associated with them. In doing quality checks, I have all the attachments that matter - these extra files are truly garbage.
Is there a way to not download them or build in a check in the code so that I don't end up with these phantom files?
#!/usr/bin/env python3
import os
import glob
import pdfx
import wget
import urllib.parse
import requests

## Accessing and Creating Six Digit File Code
pdf_dir = "./"
pdf_files = glob.glob("%s/*.pdf" % pdf_dir)

for file in pdf_files:
    ## Identify File Name and Limit to Digits
    filename = os.path.basename(file)
    newname = filename[0:6]

    ## Run PDFX to identify and download links
    pdf = pdfx.PDFx(filename)
    url_list = pdf.get_references_as_dict()
    attachment_counter = 1
    for x in url_list["url"]:
        if x[0:4] == "http":
            parsed_url = urllib.parse.quote(x)
            extension = os.path.splitext(x)[1]
            r = requests.get(x)
            with open('temporary', 'wb') as f:
                f.write(r.content)
            ## Concatenate File Name Once Downloaded
            os.rename('./temporary', str(newname) + '_attach' + str(attachment_counter) + str(extension))
            ## Increase Attachment Count
            attachment_counter += 1
    for x in url_list["pdf"]:
        parsed_url = urllib.parse.quote(x)
        extension = os.path.splitext(x)[1]
        r = requests.get(x)
        with open('temporary', 'wb') as f:
            f.write(r.content)
        ## Concatenate File Name Once Downloaded
        os.rename('./temporary', str(newname) + '_attach' + str(attachment_counter) + str(extension))
        ## Increase Attachment Count
        attachment_counter += 1
It's not clear which part of your code produces these "phantom" files, but anywhere you want to avoid downloading a file which doesn't have an extension, you can make the download conditional. If the component after the last slash doesn't contain a dot, do nothing.
if '.' in x.split('/')[-1]:
    ... download(x) etc
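Applied to the first loop in your script, the guard might look like this (a sketch only; everything except the added dot check is your original logic):

for x in url_list["url"]:
    # Skip URLs whose last path component has no dot, i.e. no extension.
    if x[0:4] == "http" and '.' in x.split('/')[-1]:
        extension = os.path.splitext(x)[1]
        r = requests.get(x)
        with open('temporary', 'wb') as f:
            f.write(r.content)
        os.rename('./temporary', str(newname) + '_attach' + str(attachment_counter) + str(extension))
        attachment_counter += 1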
I am new to Python.
I want to write code to download data from this URL: "ftp://cddis.nasa.gov/gnss/products/ionex/". The files that I want have this format: "codgxxxx.xxx.Z", and they are stored inside a separate directory for each year.
How can I download just those files using Python?
Until now I have been using wget, e.g. wget ftp://cddis.nasa.gov/gnss/products/ionex/2008/246/codg0246.07i.Z, for each one of the files, but that is too tedious.
Can anyone help me, please? Thank you.
Since you know the structure on the FTP server, this can be pretty easy to accomplish without having to use ftplib.
It would be cleaner to actually retrieve a directory listing from the server, as in this question (see the ftplib sketch after the code below); I don't seem to be able to connect to that NASA URL, though.
I would recommend reading here for more on how to actually perform an FTP download.
But something like this may work (full disclosure: I haven't tested it):
import urllib.request
import urllib.error

YEARS_TO_DOWNLOAD = 12
BASE_URL = "ftp://cddis.nasa.gov/gnss/products/ionex/"
FILE_PATTERN = "codg{}.{}.Z"
SAVE_DIR = "/home/your_name/nasa_ftp/"

year = 2006
three_digit_number = 0

for i in range(0, YEARS_TO_DOWNLOAD):
    target = FILE_PATTERN.format(str(year + i), str(three_digit_number).zfill(3))
    try:
        urllib.request.urlretrieve(BASE_URL + target, SAVE_DIR + target)
    except urllib.error.URLError as e:
        print("An error occurred trying to download {}.\nReason: {}".format(target, str(e)))
    else:
        print("{} -> {}".format(target, SAVE_DIR + target))

print("Download finished!")
I believe this is my first StackOverflow question, so please be nice.
I am OCRing a repository of PDFs (~1GB in total) ranging from 50-200 pages each and found that suddenly all of the available 100GB of remaining hard-drive space on my MacBook Pro were gone. Based on a previous post, it seems that ImageMagick is the culprit, as shown here.
I found that these files are called 'magick-*' and are stored in /private/var/tmp. For only 23 PDFs it had created 3576 files totaling 181GB.
How can I delete these files immediately within the code after they are no longer needed? Thank you in advance for any suggestions to remedy this issue.
Here is the code:
import io, os
import json
import unicodedata

from PIL import Image as PI
import pyocr
import pyocr.builders
from wand.image import Image
from tqdm import tqdm

# Where you want to save the PDFs
destination_folder = 'contract_data/Contracts_Backlog/'

pdfs = [unicodedata.normalize('NFKC', f.decode('utf8')) for f in os.listdir(destination_folder) if f.lower().endswith('.pdf')]
txt_files = [unicodedata.normalize('NFKC', f.decode('utf8')) for f in os.listdir(destination_folder) if f.lower().endswith('.txt')]

### Perform OCR on PDFs
def ocr_pdf_to_text(filename):
    tool = pyocr.get_available_tools()[0]
    lang = 'spa'
    req_image = []
    final_text = []
    image_pdf = Image(filename=filename, resolution=300)
    image_jpeg = image_pdf.convert('jpeg')
    for img in image_jpeg.sequence:
        img_page = Image(image=img)
        req_image.append(img_page.make_blob('jpeg'))
    for img in req_image:
        txt = tool.image_to_string(
            PI.open(io.BytesIO(img)),
            lang=lang,
            builder=pyocr.builders.TextBuilder()
        )
        final_text.append(txt)
    return final_text

for filename in tqdm(pdfs):
    txt_file = filename[:-3] + 'txt'
    txt_filename = destination_folder + txt_file
    if not txt_file in txt_files:
        print 'Converting ' + filename
        try:
            ocr_txt = ocr_pdf_to_text(destination_folder + filename)
            with open(txt_filename, 'w') as f:
                for i in range(len(ocr_txt)):
                    f.write(json.dumps({i: ocr_txt[i].encode('utf8')}))
                    f.write('\n')
        except:
            print "Could not OCR " + filename
A hacky way of dealing with this was to add an os.remove() statement within the main loop to remove the tmp files after creation.
tempdir = '/private/var/tmp/'
files = os.listdir(tempdir)
for file in files:
    if "magick" in file:
        os.remove(os.path.join(tempdir, file))
Image should be used as a context manager, because that is how Wand determines when to dispose of resources, including temporary files, in-memory buffers, and so on. A with block lets Wand know the boundaries within which these Image objects are still needed and beyond which they become unnecessary.
See also the official docs.
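Applied to the ocr_pdf_to_text() function above, that might look like the following sketch. The logic is unchanged; the Wand images are only wrapped in with blocks so their temporary files are cleaned up as soon as each object is no longer needed:

def ocr_pdf_to_text(filename):
    tool = pyocr.get_available_tools()[0]
    lang = 'spa'
    final_text = []
    # Each with block disposes of its Wand image (and the backing
    # temporary files under /private/var/tmp) when the block exits.
    with Image(filename=filename, resolution=300) as image_pdf:
        with image_pdf.convert('jpeg') as image_jpeg:
            for img in image_jpeg.sequence:
                with Image(image=img) as img_page:
                    blob = img_page.make_blob('jpeg')
                txt = tool.image_to_string(
                    PI.open(io.BytesIO(blob)),
                    lang=lang,
                    builder=pyocr.builders.TextBuilder()
                )
                final_text.append(txt)
    return final_text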