I'm trying to write a Python program that converts ".pdf" files to ".docx" files, using the Adobe PDF Services API (free trial).
I've found documentation showing how to transform any ".pdf" file into a ".zip" file containing ".txt" files (the extracted text) and Excel files (the tabular data).
import logging
import os.path
from adobe.pdfservices.operation.auth.credentials import Credentials
from adobe.pdfservices.operation.exception.exceptions import ServiceApiException, ServiceUsageException, SdkException
from adobe.pdfservices.operation.pdfops.options.extractpdf.extract_pdf_options import ExtractPDFOptions
from adobe.pdfservices.operation.pdfops.options.extractpdf.extract_element_type import ExtractElementType
from adobe.pdfservices.operation.execution_context import ExecutionContext
from adobe.pdfservices.operation.io.file_ref import FileRef
from adobe.pdfservices.operation.pdfops.extract_pdf_operation import ExtractPDFOperation
logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
try:
    # Get the base path.
    base_path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath("C:/..link.../extractpdf/extract_txt_from_pdf.ipynb"))))

    # Initial setup: create a credentials instance.
    credentials = Credentials.service_account_credentials_builder()\
        .from_file(base_path + "\\pdfservices-api-credentials.json") \
        .build()

    # Create an ExecutionContext using the credentials and create a new operation instance.
    execution_context = ExecutionContext.create(credentials)
    extract_pdf_operation = ExtractPDFOperation.create_new()

    # Set the operation input from a source file.
    source = FileRef.create_from_local_file(base_path + "/resources/trs_pdf_file.pdf")
    extract_pdf_operation.set_input(source)

    # Build ExtractPDF options and set them on the operation.
    extract_pdf_options: ExtractPDFOptions = ExtractPDFOptions.builder() \
        .with_element_to_extract(ExtractElementType.TEXT) \
        .with_element_to_extract(ExtractElementType.TABLES) \
        .build()
    extract_pdf_operation.set_options(extract_pdf_options)

    # Execute the operation.
    result: FileRef = extract_pdf_operation.execute(execution_context)

    # Save the result to the specified location.
    result.save_as(base_path + "/output/Extract_TextTableau_From_trs_pdf_file.zip")
except (ServiceApiException, ServiceUsageException, SdkException):
    logging.exception("Exception encountered while executing operation")
But I can't yet get the conversion to a ".docx" file to work, even after changing the name of the extracted file to name.docx.
I went through the documentation for adobe.pdfservices.operation.pdfops.options.extractpdf.extract_pdf_options.ExtractPDFOptions() but didn't find a way to tune the extraction and change the output from ".zip" to ".docx". What things can I try next?
Unfortunately, right now the Python SDK only supports the Extract portion of our PDF services. You could use the services via the REST APIs (https://documentcloud.adobe.com/document-services/index.html#how-to-get-started-) as an alternative.
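If you go the REST route, the overall flow is: authenticate, upload the PDF, request an export to DOCX, then download the result. The sketch below only illustrates that flow with the requests library; the base URL, endpoints, headers and response fields are placeholders rather than the real Adobe PDF Services API, so check the linked documentation for the actual request shapes.
# Illustrative sketch only: URL, endpoints, headers and fields are PLACEHOLDERS,
# not the real Adobe PDF Services API -- see the linked docs for the actual calls.
import requests

API_BASE = "https://example.invalid/pdf-services"     # placeholder base URL
HEADERS = {"Authorization": "Bearer <access_token>"}   # placeholder credentials

# 1. Upload the source PDF (placeholder endpoint).
with open("trs_pdf_file.pdf", "rb") as f:
    upload = requests.post(API_BASE + "/assets", headers=HEADERS, files={"file": f})
upload.raise_for_status()
asset_id = upload.json()["assetID"]                    # placeholder response field

# 2. Request an export to DOCX (placeholder endpoint and payload).
job = requests.post(API_BASE + "/exportpdf", headers=HEADERS,
                    json={"assetID": asset_id, "targetFormat": "docx"})
job.raise_for_status()

# 3. Download the finished .docx (placeholder response field).
with open("trs_pdf_file.docx", "wb") as out:
    out.write(requests.get(job.json()["downloadUri"]).content)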
I have a .txt file that I need to upload into a Dropbox folder. On my PC it works great as it is; however, I need to put the code into a Google Cloud Function, and as the GCP file system is read-only, this method is failing.
Can anyone recommend an alternative way of doing this that doesn't require me to save the data locally before pushing it up into Dropbox?
Here is my current working code for my local version:
import pathlib
import dropbox

api_key = 'XXXXXXXXXX'

# Build String And Save Locally To File
string = ["Item_A", "Item_B", "Item_C", "Item_D"]
string = str(string)
with open('Item_List.txt', 'w') as f:
    f.write(string)

# Define Local File Path
localfolder = pathlib.Path(".")
localpath = localfolder / 'Item_List.txt'

# Define Dropbox Target Location
targetfile = '/Data/' + 'Item_List.txt'

# Initialize Dropbox
d = dropbox.Dropbox(api_key)

# Upload File To Dropbox
with localpath.open("rb") as f:
    d.files_upload(f.read(), targetfile, mode=dropbox.files.WriteMode("overwrite"))
If you simply need byte data, you can use the built-in bytes function to convert a string to byte data (you also need to specify the encoding):
data = ["Item_A", "Item_B", "Item_C", "Item_D"]
string_data = str(data)
byte_data = bytes(string_data, encoding='utf-8')
And then later just use the byte data as the argument:
d.files_upload(byte_data, targetfile, mode=dropbox.files.WriteMode("overwrite"))
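Putting that together, a minimal in-memory sketch of the whole upload (the API key and target path are the placeholders from the question) could look like this, with no local file written at all:
# Minimal in-memory sketch: nothing is written to disk, so it should also work
# in a read-only environment such as a Cloud Function.
import dropbox

api_key = 'XXXXXXXXXX'              # placeholder
targetfile = '/Data/Item_List.txt'

# Build the payload entirely in memory
data = ["Item_A", "Item_B", "Item_C", "Item_D"]
byte_data = bytes(str(data), encoding='utf-8')

# Upload the bytes directly to Dropbox
d = dropbox.Dropbox(api_key)
d.files_upload(byte_data, targetfile, mode=dropbox.files.WriteMode("overwrite"))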
I am an absolute beginner when it comes to working with REST APIs in Python. We have received a SharePoint URL which has multiple folders and multiple files inside those folders in the 'Documents' section. I have been provided an 'app_id' and a 'secret_token'.
I am trying to access the .csv files, read them into a DataFrame, and perform operations on them.
The processing code is ready (I downloaded the .csv files and developed it locally), but I need help connecting to SharePoint from Python so that I never have to download such heavy files again.
I know there have already been multiple questions about this on Stack Overflow, but none of them got me to where I want to be.
I did the following and I am unsure of what to do next:
import json
from office365.runtime.auth.user_credential import UserCredential
from office365.sharepoint.client_context import ClientContext
from office365.runtime.http.request_options import RequestOptions
site_url = "https://<company-name>.sharepoint.com"
ctx = ClientContext(site_url).with_credentials(UserCredential("{app_id}", "{secret_token}"))
For site_url above, should I use the whole URL, or is it fine till ####.com?
This is what I have so far; next I want to read files from the respective folders and convert them into a DataFrame. The files will always be in .csv format.
The example hierarchy of the folders are as follows:
Documents --> Folder A, Folder B
Folder A --> a1.csv, a2.csv
Folder B --> b1.csv, b2.csv
I should be able to move to whichever folder I want and read the files based on my requirement.
Thanks for the help.
This works for me, using a SharePoint App Identity with an associated client Id and client Secret.
First, I demonstrate authenticating and reading a specific file, then getting a list of files from a folder and reading the first one.
import pandas as pd
import json
import io
from office365.sharepoint.client_context import ClientCredential
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
#Authentication (shown for a 'modern teams site', but I think it should work for a company.sharepoint.com site):
site="https://<myteams.companyname.com>/sites/<site name>/<sub-site name>"
#Read credentials from a json configuration file:
spo_conf = json.load(open(r"conf\spo.conf", "r"))
client_credentials = ClientCredential(spo_conf["RMAppID"]["clientId"],spo_conf["RMAppID"]["clientSecret"])
ctx = ClientContext(site).with_credentials(client_credentials)
#Read a specific CSV file into a dataframe:
folder_relative_url = "/sites/<site name>/<sub site>/<Library Name>/<Folder Name>"
filename = "MyFileName.csv"
response = File.open_binary(ctx, "/".join([folder_relative_url, filename]))
df = pd.read_csv(io.BytesIO(response.content))
#Get a list of file objects from a folder and read one into a DataFrame:
def getFolderContents(relativeUrl):
    contents = []
    library = ctx.web.get_list(relativeUrl)
    all_items = library.items.filter("FSObjType eq 0").expand(["File"]).get().execute_query()
    for item in all_items:  # type: ListItem
        cur_file = item.file
        contents.append(cur_file)
    return contents
fldrContents = getFolderContents('/sites/<site name>/<sub site>/<Library Name>')
response2 = File.open_binary(ctx, fldrContents[0].serverRelativeUrl)
df2 = pd.read_csv(io.BytesIO(response2.content))
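As a small usage sketch building on the above (it only reuses objects already defined here, deriving each file name from serverRelativeUrl), you could read every .csv in the folder into a dict of DataFrames:
# Usage sketch: read each .csv returned by getFolderContents() into a DataFrame,
# keyed by file name. Reuses ctx, File, pd and io from above.
dfs = {}
for f in getFolderContents('/sites/<site name>/<sub site>/<Library Name>'):
    fname = f.serverRelativeUrl.split("/")[-1]
    if fname.lower().endswith(".csv"):
        resp = File.open_binary(ctx, f.serverRelativeUrl)
        dfs[fname] = pd.read_csv(io.BytesIO(resp.content))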
Some References:
Related SO thread.
Office365 library GitHub site.
Getting a list of contents in a doc library folder.
Additional notes following up on comments:
The site path doesn't include the full URL for the site home page (ending in .aspx); it just ends with the name of the site (or sub-site, if relevant to your case).
You don't need to use a configuration file to store your authentication credentials for the SharePoint application identity; you could just replace spo_conf["RMAppID"]["clientId"] with the value of the SharePoint-generated client Id, and do similarly for the client Secret. But this is a simple example of what the text of a JSON file could look like:
{
    "MyAppName": {
        "clientId": "my-client-id",
        "clientSecret": "my-client-secret",
        "title": "name_for_application"
    }
}
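In other words, a minimal sketch of the no-configuration-file variant (the values are placeholders) is just:
# Inline credentials instead of a JSON config file (placeholder values)
client_credentials = ClientCredential("my-client-id", "my-client-secret")
ctx = ClientContext(site).with_credentials(client_credentials)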
I have a small but mysterious and seemingly unsolvable problem using Python to open a password-protected file in an AWS S3 bucket.
The password I have been given is definitely correct and I can download the zip to Windows and extract it to reveal the csv data I need.
However I need to code up a process to load this data into a database regularly.
The password has a pattern like this (includes mixed case letters, numbers and a single "#"):-
ABCD#Efghi12324567890
The code below works with other zip files I place in the location with the same password:-
import boto3
import pyzipper
from io import BytesIO
s3_resource = boto3.resource('s3', aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
zip_obj = s3_resource.Object(bucket_name=my_bucket, key=my_folder + my_zip)
buffer = BytesIO(zip_obj.get()["Body"].read())
z = pyzipper.ZipFile(buffer)
my_newfile=z.namelist()[0]
s3_resource.meta.client.upload_fileobj(
    z.open(my_newfile, pwd=b"ABCD#Efghi12324567890"),  # HERE IS THE OPEN COMMAND
    Bucket=my_bucket,
    Key=my_folder + my_newfile)
I am told the password is incorrect:-
RuntimeError: Bad password for file 'ThisIsTheFileName.csv'
I resorted to using pyzipper rather than zipfile, since zipfile didn't support the compression method of the file in question:-
That compression method is not supported
In 7-zip I can see the following for the zip file:-
Method: AES-256 Deflate
Characteristics: WzAES: Encrypt
Host OS: FAT
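For what it's worth, this matches what the zip headers show: WinZip AES entries are stored with compression method 99, which the standard library's zipfile can list but not extract. A small sketch to confirm that (the filename is a placeholder):
# Sketch: list each entry's compression method; 99 indicates a WinZip AES entry,
# which stdlib zipfile cannot extract ("That compression method is not supported").
import zipfile

with zipfile.ZipFile("my_protected.zip") as z:   # placeholder filename
    for info in z.infolist():
        print(info.filename, info.compress_type)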
So to confirm:-
- The password is definitely correct (I can open the file manually with it)
- The code seems OK - it opens my other zip files with the same password
What is the issue here please and how do I fix it?
You would have my sincere thanks!
Phil
With some help from a colleague and a useful article, I now have this working.
Firstly, given the compression type, I found it necessary to use the AESZipFile() method of pyzipper (although this method also seemed to work on other compression types).
Secondly, the AESZipFile() method apparently accepts a BytesIO object as well as a file path, presumably because this is what it sees when it opens the file.
Therefore the zip file can be extracted in situ without having to download it first.
This method creates the pyzipper object which you can then read by specifying the file name and the password.
The final code looks like this:-
import pyzipper
import boto3
from io import BytesIO
my_bucket = ''
my_folder = ''
my_zip = ''
my_password = b''
aws_access_key_id=''
aws_secret_access_key=''
s3 = boto3.client('s3', aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
s3_file = s3.get_object(Bucket=my_bucket, Key=my_folder + my_zip)
s3_iodata = BytesIO(s3_file['Body'].read())
f = pyzipper.AESZipFile(s3_iodata)
my_file = f.namelist()[0]
file_content = f.read(my_file, pwd = my_password)
response = s3.put_object(Body=file_content, Bucket=my_bucket, Key=my_folder + my_file)
Here is an article that was useful:-
https://www.linkedin.com/pulse/extract-files-from-zip-archives-in-situ-aws-s3-using-python-tom-reid
I hope this is helpful to someone,
Phil
I have Neo4j operational on Azure. I can load data using Python and a series of CREATE statements:
create (n:Person) return n
I can query successfully using Python.
Using LOAD CSV requires a file in the Neo4j import directory. I've located that directory, but moving a file into it is blocked. I've also tried putting the file in an accessible directory, but then I cannot figure out how to address the path in the LOAD CSV statement.
This LOAD gives an error because the file cannot get into the Neo4j import directory:
USING PERIODIC COMMIT 10000 LOAD CSV WITH HEADERS FROM 'file:///FTDNATree.csv' AS line FIELDTERMINATOR '|' merge (s:SNPNode{SNP:toString(line.Parent)})
This statement does not find the file and gives an error: EXTERNAL file not found
USING PERIODIC COMMIT 10000 LOAD CSV WITH HEADERS FROM 'file:///{my directory path/}FTDNATree.csv' AS line FIELDTERMINATOR '|' merge (s:SNPNode{SNP:toString(line.Parent)})
Even though Python and Neo4j are in the same resource group, they are on different VMs. The problem seems to be the interoperability between the two VMs?
If you have access to neo4j.conf, then you can modify the value of dbms.directories.import to point to an accessible directory.
See https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/#config_dbms.directories.import
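A minimal sketch of the change (the directory path is only an illustration; use one your process can actually write to):
# neo4j.conf
dbms.directories.import=/path/accessible/to/your/process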
The solution was NOT well documented in one place, but here is what evolved by trial and error and which works:
1. I created a storage account within the resource group.
2. Created a directory, accessible from code, in which the upload file was placed.
3. Added a container and called it neo4j-import.
4. I could then transfer the file (i.e., the *.csv file) to the container as a blob.
5. You then need to make the file accessible. This involves creating a SAS token and attaching it to a URL pointing to the container and the file (see the Python code to do this below).
6. You can test this URL in your local browser. It should retrieve the file, which is not accessible without the SAS token.
7. This URL is used in the LOAD CSV statement and successfully loads the Neo4j database.
The code for step 4:
from azure.storage.blob import (BlobServiceClient, BlobClient, ContainerClient,
                                generate_account_sas, ResourceTypes, AccountSasPermissions)

def UploadFileToDataStorage(FileName,
                            UploadFileSourceDirecory=ImportDirectory,
                            BlobConnStr=AzureBlobConnectionString,
                            Container="neo4j-import"):
    # Uploads the file as a blob to data storage
    # https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-python#upload-blobs-to-a-container
    blob_service_client = BlobServiceClient.from_connection_string(BlobConnStr)
    blob_client = blob_service_client.get_blob_client(container=Container, blob=FileName)
    with open(UploadFileSourceDirecory + FileName, "rb") as data:
        blob_client.upload_blob(data)
The key Python code (step 5 above):
from datetime import datetime, timedelta  # needed for the expiry calculation below

def GetBlobURLwithSAS(FileName, Container="neo4j-import"):
    # https://pypi.org/project/azure-storage-blob/
    # https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobserviceclient?view=azure-python
    # Generates a SAS token for the blob so it can be consumed by another process
    sas_token = generate_account_sas(
        account_name="{storage account name}",
        account_key="{storage acct key}",
        resource_types=ResourceTypes(service=False, container=False, object=True),
        permission=AccountSasPermissions(read=True),
        expiry=datetime.utcnow() + timedelta(hours=1))
    return "https://{storage account name}.blob.core.windows.net/" + Container + "/" + FileName + "?" + sas_token
The LOAD statement looks like this and does not use the file:/// prefix:
LOAD CSV WITH HEADERS FROM '" + {URL from above} + "' AS line FIELDTERMINATOR '|'{your cypher query for loading csv}
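For completeness, a minimal sketch of running that LOAD CSV from Python with the official neo4j driver (the bolt URL and credentials are placeholders; GetBlobURLwithSAS and the MERGE clause come from above):
from neo4j import GraphDatabase

csv_url = GetBlobURLwithSAS("FTDNATree.csv")  # SAS-signed URL from step 5
driver = GraphDatabase.driver("bolt://<neo4j host>:7687", auth=("neo4j", "<password>"))
query = ("LOAD CSV WITH HEADERS FROM $url AS line FIELDTERMINATOR '|' "
         "MERGE (s:SNPNode {SNP: toString(line.Parent)})")
with driver.session() as session:
    session.run(query, url=csv_url)
driver.close()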
I hope this helps others to navigate this scenario!
What I need is a command-line tool to convert Excel and ODS spreadsheet files to CSV, which I can use on a web server (Ubuntu 16.04).
I already read this: https://pypi.python.org/pypi/unotools
which works fine for the given examples.
And this: http://www.linuxjournal.com/content/convert-spreadsheets-csv-files-python-and-pyuno-part-1v2
which should do the work I want it to do, but does not in my environment.
I think my problem is in the method Calc.store_to_url:
Line throwing exception
component.store_to_url(url,'FilterName','Text - txt - csv (StarCalc)')
I really would appreciate a hint.
Exception
unotools.unohelper.ErrorCodeIOException: SfxBaseModel::impl_store failed: 0x81a
Full source
import sys
from os.path import basename, join as pathjoin, splitext
from unotools import Socket, connect, parse_argument
from unotools.component.calc import Calc
from unotools.unohelper import convert_path_to_url

def get_component(args, context):
    _, ext = splitext(args.file_)
    url = convert_path_to_url(args.file_)
    component = Calc(context, url)
    return component

def convert_csv(args, context):
    component = get_component(args, context)
    url = 'out/result.csv'
    component.store_to_url(url, 'FilterName', 'Text - txt - csv (StarCalc)')
    component.close(True)

args = parse_argument(sys.argv[1:])
context = connect(Socket(args.host, args.port), option=args.option)
convert_csv(args, context)
The URL must be in file:// format.
url = convert_path_to_url('out/result.csv')
See the store_to_url example at https://pypi.python.org/pypi/unotools.
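Applied to the code in the question, convert_csv becomes (a sketch using the same names as above; the relative-path caveat below still applies):
def convert_csv(args, context):
    component = get_component(args, context)
    # Build a file:// URL instead of passing a plain path
    url = convert_path_to_url('out/result.csv')
    component.store_to_url(url, 'FilterName', 'Text - txt - csv (StarCalc)')
    component.close(True)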
EDIT:
To use the absolute path, choose one of these; there is no need to combine them.
url = 'file:///home/me/out/result.csv'
url = convert_path_to_url('/home/me/out/result.csv')
To use the relative path, first verify that the working directory is '/home/me' by calling os.getcwd().