Reading a JSON file within a folder on Mac vs. Windows - Python

Currently, I am trying to write an application that can run on both Mac and Windows. We have a folder with PATH = "[folder1]\configurations\globals.json". The following function works on Windows:
def grab_api_credentials(resource: str) -> dict:
    """
    :param resource: database, fmp, fred, polygon
    """
    with open(PATH, 'r') as file:
        data = json.load(file)
    if resource is None:
        return data
    return data[resource]
How would you change the PATH variable to accommodate a Mac?
To be sure, I have looked at many resources online, yet none of them showed how to read a JSON file within a folder. I greatly appreciate your help!
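One common way to make PATH work on both systems is to build it from path components instead of hard-coding backslashes. A minimal sketch using pathlib, assuming the configurations folder sits next to the script (adjust the base directory to your actual layout):
import json
from pathlib import Path

# Assumption: the configurations folder lives next to this script;
# change BASE_DIR if your project is laid out differently.
BASE_DIR = Path(__file__).resolve().parent
PATH = BASE_DIR / "configurations" / "globals.json"

# the existing function works unchanged, since open() accepts Path objects
with open(PATH, 'r') as file:
    data = json.load(file)
The equivalent built with os.path.join("configurations", "globals.json") works just as well if you prefer os.path.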

Related

How to use Python to create a URL shortcut on Mac?

There are answers for the Windows version, but not for Mac.
I have searched Google but found no appropriate result.
Let's say I want to create a shortcut to google.com on the desktop; then we could run the Python function
createShortcut("www.google.com","~/Desktop")
What could be the function body?
You can create a macOS .url file inside the function. It is formatted as follows:
[InternetShortcut]
URL=http://www.yourweb.com/
IconIndex=0
Here is a sample implementation:
import os

def createShortcut(url, destination):
    # expand ~ to the user's home directory if present
    if '~' in destination:
        destination = destination.replace('~', os.path.expanduser("~"))
    # macOS .url file format
    text = '[InternetShortcut]\nURL=https://{}\nIconIndex=0'.format(url)
    # write the .url file into the destination folder
    with open(os.path.join(destination, 'my_shortcut.url'), 'w') as fw:
        fw.write(text)

createShortcut("www.google.com", '~/Desktop/')

Is it possible to save a temporary file in an Azure Function Linux Consumption Plan in Python?

First of all, sorry for my English. I have an Azure Function on a Linux Consumption Plan using Python, and I need to generate an HTML document, transform it to PDF using wkhtmltopdf, and send it by email.
# generate temporary pdf
config = pdfkit.configuration(wkhtmltopdf="binary/wkhtmltopdf")
pdfkit.from_string(pdf_content, 'report.pdf', configuration=config, options={})
# read pdf and transform to bytes
with open('report.pdf', 'rb') as f:
    data = f.read()
# encode bytes
encoded = base64.b64encode(data).decode()
# send email
EmailSendData.sendEmail(html_content, encoded, spanish_month)
The code runs fine in my local development environment, but when I deploy the function and execute the code I get an error saying:
Result: Failure Exception: OSError: wkhtmltopdf reported an error: Loading pages (1/6) [> ] 0% [======> ] 10% [==============================> ] 50% [============================================================] 100% QPainter::begin(): Returned false Error: Unable to write to destination
I think that error is reported because, for some reason, write permission is not available. Can you help me to solve this problem?
Thanks in advance.
The tempfile.gettempdir() method returns a temporary folder, which on Linux is /tmp. Your application can use this directory to store temporary files generated and used by your functions during execution.
So use /tmp/report.pdf as the path for saving the temporary file.
with open('/tmp/report.pdf', 'rb') as f:
    data = f.read()
For more details, you could refer to this article.
Final correct code:
import os
import tempfile

config = pdfkit.configuration(wkhtmltopdf="binary/wkhtmltopdf")
local_path = os.path.join(tempfile.gettempdir(), 'report.pdf')
logger.info(tempfile.gettempdir())
pdfkit.from_string(pdf_content, local_path, configuration=config, options={})
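For completeness, a sketch of the read-back step pointed at the same temp path; the email call then proceeds exactly as in the original snippet:
import base64
import os
import tempfile

# the PDF was written to the temp directory by pdfkit above
local_path = os.path.join(tempfile.gettempdir(), 'report.pdf')

# read the generated PDF back and base64-encode it for the email attachment
with open(local_path, 'rb') as f:
    encoded = base64.b64encode(f.read()).decode()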

How can I download a zip to the Python directory from Google Storage after obtaining the response object?

After running the following code successfully, I think I am close to getting access to the zip file in gcloud storage. However, I really cannot figure out what to do next, i.e. how to download or otherwise make the zip file available to the Python environment as a programmable object.
from gs import GSClient

client = GSClient()
object_meta = client.get("b/rcmikejupyter/o/output1.zip")
with client.get("b/rcmikejupyter/o/output1.zip", params=dict(alt="media"), stream=True) as res:
    object_bytes = res.raw.read()
Assuming this is a bytes object, you can write it to disk:
with open("pathto/yourfile.zip", "wb") as file:
file.write(object_bytes)
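If you would rather work with the archive directly in memory instead of writing it to disk first, here is a small sketch using the standard library (object_bytes is the variable from the snippet above):
import io
import zipfile

# wrap the downloaded bytes in a file-like buffer and open it as a zip archive
with zipfile.ZipFile(io.BytesIO(object_bytes)) as archive:
    print(archive.namelist())                    # list the members of the archive
    data = archive.read(archive.namelist()[0])   # bytes of the first member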

How to get the absolute path of the file selected as input file in Python?

I want the absolute path of the file selected as the input file (from the file browser in the form) using the Python code below:
for attr, document in request.files.iteritems():
    orig_filename = document.filename
    print os.path.abspath(orig_filename)
    mhash = get_hash_for_doc(orig_filename)
This prints the path of the current working directory (where the Python script is executing) with 'orig_filename' appended to it, which is the wrong path. I am using Python 2.7 and Flask 0.12 under Linux.
The requirement is to find the hash value of the file before uploading it to the server, to check for deduplication. So I need to pass the selected file to another function for hashing, like this:
def get_hash_for_doc(orig_filename):
    mhash = None
    hash = sha1()  # md5()
    with open(orig_filename, "rb") as f:
        for chunk in iter(lambda: f.read(128 * hash.block_size), b""):
            hash.update(chunk)
    mhash = hash.hexdigest()
    return mhash
In this function I want to read the file from the absolute path of orig_filename before uploading. I have left out all other code checks here.
First you need to create a temp file to stand in for the uploaded file, then run your processing on it:
import tempfile, os

try:
    fd, tmp = tempfile.mkstemp()
    with os.fdopen(fd, 'wb') as out:
        # 'document' is the uploaded FileStorage object from the question's loop
        out.write(document.read())
    mhash = get_hash_for_doc(tmp)
finally:
    os.unlink(tmp)
If you want to find folder/file.ext for an input file, simply use os.path.abspath, like:
savefile = os.path.abspath(Myinputfile)
when "Myinputfile" is a variable that contains the relative path and file name. For instance, derived from an argument define by the user.
But if you prefer to have absolute address of the folder, without file name try this:
saveloc = os.path.dirname(os.path.realpath(Myinputfile))
You can use pathlib to find the absolute path of the selected file.
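A minimal sketch of that, reusing orig_filename from the question. Note that, like os.path.abspath, this resolves against the server's working directory, not the client's machine, and that pathlib is standard library only in Python 3 (on Python 2.7 you would need the pathlib2 backport):
from pathlib import Path

# resolve() returns an absolute, normalized path
absolute_path = Path(orig_filename).resolve()
print(absolute_path)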

Creating an on-the-fly zip file from string content for AWS Lambda in Python

I have a Python script that creates a Lambda script in AWS along with all the policies and triggers. I use the Python boto3 library for that. I create the zip file for the Lambda on the fly rather than uploading a static zip file from the hard drive. I use the simple code below to create my zip file. It creates the zip file without any problems, my Python code uploads this zip file as a Lambda script, and I can view my Lambda script in AWS without any problems. But when I run my Lambda script it gives me a module not found error, even though I can clearly see that both the module name and the file name exist and are viewable.
Unable to import module 'xxxx': No module named xxxx
In the file system I double-click the zip file that was created by this code and see that the content is there and everything looks normal.
If I bypass zipping on the fly and create the zip statically using WinZip, and let the rest of the Python & boto3 script upload that file, then it works just fine.
def CreateLambdaZip(self, fileName, fileContent):
    with zipfile.ZipFile('Lambda/' + fileName + '.zip', 'w') as myzipc:
        myzipc.writestr(fileName + '.py', fileContent)
It kind of looks like the zip file is missing some special headers that are needed by AWS Lambda. Is there such a thing? Because in the file system the zip file created by the Python code and the one created by WinZip look exactly the same, so I know there's nothing wrong with the Lambda script itself.
Update: I'm uploading the zip file using the code below, which reads the zip file that was created by the above snippet.
with open('Lambda/' + fileName + '.zip', 'rb') as zipFile:
    func = boto3.client("lambda").create_function(
        FunctionName=lambdaFunction,
        Runtime='python2.7',
        Role=role['Role']['Arn'],
        Handler=fileName + "." + functionName,
        Description=description,
        Timeout=10,
        MemorySize=256,
        Publish=True,
        Code={'ZipFile': zipFile.read()},
    )
When I use zipFile.read() I get two different headers for the same content, depending on whether I zip it with WinZip or with Python's zipfile module.
Zip file that's created programmatically using Python:
b'PK\x03\x04\x14\x00\x00\x00\x00\x00\xe4~\x01IO\x96J=Z\x07\x00\x00Z\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.pyimport json\nimport boto3\nimport time\nfrom datetime import date, timedelta\n\nprint(\'Loading scheduled EC2 backup actions\')\n\ndef create_snapshots(event, context):\n """\n Lambda function that executes daily snapshots for the instances that
and the zip file created by WinZip:
b'PK\x03\x04\x14\x00\x02\x00\x08\x004X\xfcH\x88\x1f\xce\xb5&\x03\x00\x00b\x07\x00\x00\x19\x00\x00\x00schedule-ec2-snapshots.py\x8dU]k\xdb#\x10|7\xf4?,\nA\x12qL\xda\x06B\r~I\x93Bh\x9b\x87&\xf4E\x15\xe1\xac[\xdb\xd7HwBw2\t\xc1\xff\xbd{+\xeb\xcb.\xb4\n\xc4\xba\xdb\xd1\xec\xce\xdc\xae\xa4\x8a\xd2T\x0e~[\xa3\'\xaa\xb9_\x1ag>\xb6\x0b\xa7\n\x9c\xac*S\x80\x14\x0e\xfd\n\xf6\x11\xbf\x9er\\b\xee\xc4dRVJ\xbb(\xfcf\x84Tz\r6\xdb\xa0\xacs\x94p\xfb\xf9\x03,E\xf6\\\x97
With the info above I was able to start on the in-memory solution. The deployment of that zip file worked, but I could not use the resulting function. I got this error:
Unable to import module '<function-name>': No module named <function-name>
I got it to work by specifying the file permissions.
I then use the in-memory zip to create an AWS Lambda function.
Setup:
file_map is a dictionary of full_path->file_bytes.
files is a list of full_paths.
# lambda_code is assumed to be a boto3 Lambda client, e.g. boto3.client('lambda')
def create_lambda_function(function_name, desc, role, handler, file_map, files):
    zip_contents = create_in_mem_zip_archive(file_map, files)
    result = lambda_code.create_function(
        FunctionName=function_name,
        Runtime="python2.7",
        Description=desc,
        Role=role,
        Handler=handler,
        Code={'ZipFile': zip_contents},
    )
    return result
def create_in_mem_zip_archive(file_map, files):
    buf = io.BytesIO()
    logger.info("Building zip file: " + str(files))
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zfh:
        for file_name in files:
            file_blob = file_map.get(file_name)
            if file_blob is None:
                logger.error("Missing file {} from files".format(file_name))
                continue
            try:
                info = zipfile.ZipInfo(file_name)
                info.date_time = time.localtime()
                info.compress_type = zipfile.ZIP_DEFLATED
                info.external_attr = 0777 << 16L  # give full access
                # info.external_attr = 0644 << 16L  # -r-wr--r--
                # info.external_attr = 0755 << 16L  # -rwxr-xr-x
                zfh.writestr(info, file_blob)
            except Exception as ex:
                logger.info("Error reading file: " + file_name + ", error: " + ex.message)
    buf.seek(0)
    return buf.read()
I have experienced exactly the same problem you have. My solution is to NOT use an on-the-fly zip file. Create a real zip file and add real files to it, and it just works. You can do that even in the Lambda environment: by using a file path like "/tmp/yourfile.txt" you can create a real temporary file while the Lambda executes.
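A sketch of that approach, mirroring the fileName/fileContent arguments from the question's CreateLambdaZip (paths and names here are illustrative):
import os
import tempfile
import zipfile

def create_real_zip(file_name, file_content):
    tmp_dir = tempfile.gettempdir()  # '/tmp' on Lambda / Linux
    src_path = os.path.join(tmp_dir, file_name + '.py')
    zip_path = os.path.join(tmp_dir, file_name + '.zip')
    # write the source to a real file on disk first ...
    with open(src_path, 'w') as src:
        src.write(file_content)
    # ... then add that real file to a real zip file on disk
    with zipfile.ZipFile(zip_path, 'w', zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_path, arcname=file_name + '.py')
    return zip_path
Because ZipFile.write copies the file's permission bits from disk into the archive, this sidesteps the manual external_attr handling needed in the in-memory approach.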
