I have a program that sends some data to GCS in a while loop. It requires a JSON credential file and looks roughly like the following:
import os
import sys
from pathlib import Path

GOOGLE_SERVICE_ACCOUNT_FP = "pos-service-account.json"
IS_INSTALLER = getattr(sys, "frozen", False) and hasattr(sys, "_MEIPASS")
if IS_INSTALLER:
    GOOGLE_SERVICE_ACCOUNT_FP = os.path.join(sys._MEIPASS, GOOGLE_SERVICE_ACCOUNT_FP)

def send_data(data, credential_fp: Path = Path(GOOGLE_SERVICE_ACCOUNT_FP)):
    if not credential_fp.is_file():
        msg = f"Google service account key json file: {str(credential_fp)!r} is not found!\nPWD files:\n{list(Path.cwd().glob('*'))}"
        raise FileNotFoundError(msg)
    # some function that posts data to cloud storage
    post_to_gcs(data)

def main():
    while True:
        data = ...
        send_data(data)

if __name__ == '__main__':
    main()
I packaged it as one file with the following command:
pyinstaller --onefile -w .\main.py --add-data 'pos-service-account.json;.'
Then when I clicked the .exe file on Windows, it worked fine; I could see the data posted by this program. However, when I came back after a few days, I got this file-not-found error:
Google service account key json file: 'C:\\Users\\POS\\AppData\\Local\\Temp\\_MEI30522\\pos-service-account.json' is not found!
This does not make sense to me: the program was working at the beginning, which means it did find the JSON file. I'm still trying to reproduce the error, but so far my suspicion is:
Does the sys._MEIPASS directory ever change? E.g. if the computer goes to sleep and comes back on (I'm going to test it), I could see how my script would fail if _MEIPASS changes while the program is running.
Yes, the _MEIPASS folder changes its name often, so hardcoding that file location is a bad idea. Instead, build the path relative to the __file__ variable so it always refers to the currently running program's location.
from pathlib import Path

GOOGLE_SERVICE_ACCOUNT_FP = "pos-service-account.json"
# Resolve the credential file relative to this file, not the current working directory
CREDENTIAL = Path(__file__).resolve().parent / GOOGLE_SERVICE_ACCOUNT_FP

def send_data(data, credential_fp: Path = CREDENTIAL):
    if not credential_fp.is_file():
        msg = f"Google service account key json file: {str(credential_fp)!r} is not found!\nPWD files:\n{list(Path.cwd().glob('*'))}"
        raise FileNotFoundError(msg)
    # some function that posts data to cloud storage
    post_to_gcs(data)

def main():
    while True:
        data = ...
        send_data(data)

if __name__ == '__main__':
    main()
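If you keep the _MEIPASS lookup, another way to avoid caching a stale path is to resolve it at call time rather than once at import time. A minimal sketch (the helper name is my own, not from the question):

```python
import sys
from pathlib import Path

def resource_path(relative: str) -> Path:
    """Resolve a bundled resource at call time.

    Under a PyInstaller one-file build, sys._MEIPASS points at the
    temporary extraction directory of the *current* run; looking it up
    lazily avoids holding on to a path from a previous run.
    """
    base = Path(getattr(sys, "_MEIPASS", Path(__file__).resolve().parent))
    return base / relative

# Example: look the key file up each time it is needed
credential_fp = resource_path("pos-service-account.json")
```

Outside a frozen build, the helper simply falls back to the script's own directory.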
Related
I have made a config file named "config.cfg" that is in the same folder as my .py file.
My config file is like this:
[NetAccess]
host=localhost
port=3306
[Credentials]
username=myuser
password=mypass
[Database]
name=mydb
In my .py file I have this code:
import configparser
config = configparser.ConfigParser()
config.read('config.cfg')
__DBMSuser = config.get('Credentials', 'username')
__DBMSpsw = config.get('Credentials', 'password')
When I launch my program, I receive this error:
configparser.NoSectionError: No section: 'Credentials'
Can someone help me?
I've solved it. My code was correct, and the .cfg file was correctly saved in my program's folder, but because of other parts of my code, my current directory had changed to "C:/Windows/System32". Since the file was never read, I got no error until I tried to read the sections, hence the NoSectionError.
To solve it, I chose a standard folder (in AppData) in which to save and read my file, and then I used the absolute path.
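A minimal sketch of that kind of fix, assuming the config file stays next to the script (function name is my own):

```python
import configparser
from pathlib import Path

def load_credentials(cfg_name: str = "config.cfg"):
    # Build an absolute path next to this script, so an os.chdir elsewhere
    # in the program cannot break the lookup
    cfg_path = Path(__file__).resolve().parent / cfg_name
    config = configparser.ConfigParser()
    if not config.read(cfg_path):  # read() returns an empty list when the file is missing
        raise FileNotFoundError(f"Config file not found: {cfg_path}")
    return config.get("Credentials", "username"), config.get("Credentials", "password")
```

The explicit check on read() turns the silent failure into an immediate, descriptive error.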
Your code works for me. Most likely the issue is in reading the config file itself. ConfigParser's read method fails silently if it cannot find or read the file, but it returns the list of files it successfully read. Use it to check whether the read succeeded:
import configparser

config = configparser.ConfigParser()
filename = 'config.cfg'
read_ok = config.read(filename)
if read_ok:
    __DBMSuser = config['Credentials']['username']
    __DBMSpsw = config['Credentials']['password']
else:
    print(f'Could not read file {filename}')
There is no mistake in your code; it works for me.
I think there is some small error with the file:
Make sure your file is in the same directory as the Python file.
Have you saved your file? Maybe you forgot to press Ctrl+S.
If even that's not working for you, try another version of Python.
I am attempting to make a program update itself to the newest version that I have made, e.g. after I add new functionality to it. It would be useful to be able to upload the updated file to a central location like my Raspberry Pi and have the program update itself across all of my computers without updating each one individually.
I have written the code below, but it does not work. It can recognize when the file is up to date, but running the new program it downloads fails: it successfully downloads the update and deletes itself, yet the new program is never run, and no error messages are shown.
Update test.py:
#updaterV1.py
import time
import requests
import os
import hashlib

time.sleep(5)
cwd = os.getcwd()

URL = r"http://[rasberry pi's ip]/update%20files/dev/hash.txt"
hash_path = os.path.join(cwd, "remote hash.txt")
with open(hash_path, "wb") as f:
    f.write(requests.get(URL).content)
with open(hash_path, "r") as hash_file:
    remotehash = hash_file.readline().strip()
os.remove(hash_path)

hasher = hashlib.sha256()
with open(__file__, 'rb') as self_file:
    selfunhashed = self_file.read()
    hasher.update(selfunhashed)
selfhash = hasher.hexdigest()
print(selfhash)
print(remotehash)

if selfhash == remotehash:
    print("program is up to date")
    input()
else:
    update_path = os.path.join(cwd, "temp name update.py")
    URL = r"http://[rasberry pi's ip]/update%20files/dev/update.py"
    with open(update_path, "wb") as f:
        f.write(requests.get(URL).content)
    with open(update_path, "r") as f:
        name = f.readline().strip()
        name = name[1:]  # use the 1st line as "#name.py", not "# name"
    update_path = os.path.join(cwd, name)
    try:
        os.remove(update_path)
    except:
        pass
    os.rename(os.path.join(cwd, "temp name update.py"), update_path)
    os.system("python \"" + update_path + "\"")
    print("removing self file now")
    os.remove(__file__)
It uses a separate TXT file, stored in the same folder, containing the hash of the program, so it can check against the remote file's hash without downloading the actual file and hashing it locally.
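The local-hash side of that check can be sketched as a standalone helper (function name is my own; reading in chunks keeps memory flat even for large files):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in 8 KiB chunks."""
    hasher = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            hasher.update(chunk)
    return hasher.hexdigest()
```

Comparing sha256_of(__file__) against the first line of the downloaded hash.txt then reproduces the up-to-date check.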
I want to export the wallpapers directly to my Nextcloud, but it does not work; I am handling an object in Python.
import easywebdav
from wallhaven.api import Wallhaven
#Settings Webdav
webdav = easywebdav.connect('192.168.1.40', username='Uri3l', password='mypassword', protocol='http', port=80, verify_ssl=False)
Mydir = webdav.cd("Wallhaven")
results = wallhaven.search()
for wallpaper in results.data:
    wallpaper.save(os.path.dirname(Mydir))
download()
I get this error about an os.PathLike object when I run the code:
File "/home/dark/Dev/Bot/WallHaven-Bot/Bot_with_api.py", line 39, in download
wallpaper.save(os.path.dirname(Mydir))
File "/usr/lib/python3.9/posixpath.py", line 152, in dirname
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
Be careful: you are using two incompatible interfaces. Even if the cd function returned a path (by definition it returns nothing, which is the same as None), the wallhaven module would download the file somewhere on your local computer instead of to WebDAV. The easiest way to find this out is to look at the proper method documentation, e.g. Wallhaven.save.
You can achieve what you want by using your computer as temporary storage for both modules.
import os
import tempfile

import easywebdav
from wallhaven.api import Wallhaven

# Settings Webdav
webdav = easywebdav.connect('192.168.1.40', username='Uri3l', password='mypassword', protocol='http', port=80,
                            verify_ssl=False)
webdav.cd("Wallhaven")

wallhaven = Wallhaven()  # the question's snippet never created this instance
results = wallhaven.search()
with tempfile.TemporaryDirectory() as td:
    for wallpaper in results.data:
        filename = f"wallhaven-{wallpaper.id}{wallpaper.extension}"  # from the Wallpaper's save function
        wallpaper.save(td)
        webdav.upload(os.path.join(td, filename), filename)  # should upload to the webdav cwd; can't test it
I am trying to give the client side the option of downloading some files in Flask. There can be multiple files, or a single file, available for the user/client to download.
However, I am not able to work out how to offer the user multiple files for download.
Here is what I have tried so far:
@app.route('/download_files')
def download():
    count = 0
    download_list = []
    for path in pathlib.Path("dir1/dir2").iterdir():
        if path.is_file():
            for i in names:
                if pathlib.PurePosixPath(path).stem == i:
                    count += 1
                    download_list.append(path)
    return send_file(download_list, as_attachment=True, mimetype="text/plain", download_name="Downloaded Files", attachment_filename="Generated Files")
This does not work properly even with a single file. The files I am trying to download are text files with the .sql extension.
Will I somehow have to zip multiple files and then provide the download option? Please guide me through my available options.
In order to offer several files together as a download, your only option is to compress them into an archive.
In my example, all files that match the specified pattern are listed and compressed into a zip archive. The archive is written to memory and sent by the server.
from flask import Flask
from flask import send_file
from glob import glob
from io import BytesIO
from zipfile import ZipFile
import os

app = Flask(__name__)

@app.route('/download')
def download():
    target = 'dir1/dir2'

    stream = BytesIO()
    with ZipFile(stream, 'w') as zf:
        for file in glob(os.path.join(target, '*.sql')):
            zf.write(file, os.path.basename(file))
    stream.seek(0)

    return send_file(
        stream,
        as_attachment=True,
        download_name='archive.zip'
    )
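The in-memory zip step can be exercised on its own, independent of Flask. A small sketch with hypothetical file names and contents:

```python
from io import BytesIO
from zipfile import ZipFile

# Build the archive entirely in memory, as the view function does
stream = BytesIO()
with ZipFile(stream, "w") as zf:
    zf.writestr("report.sql", "SELECT 1;")               # hypothetical content
    zf.writestr("schema.sql", "CREATE TABLE t (id INT);")
stream.seek(0)

# Reading it back confirms both members made it into the archive
with ZipFile(stream) as zf:
    print(zf.namelist())  # → ['report.sql', 'schema.sql']
```

Because the BytesIO stream never touches disk, no temporary files need cleaning up after the response is sent.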
You haven't provided a code sample showing where you actually get these files, or this file. A minimum working example would look like this:
from flask import Flask, request

app = Flask(__name__)

@app.route('/download_files', methods=['POST'])
def download():
    file = request.files['file']  # for one file
    files = request.files.getlist("file[]")  # if there are multiple files provided

if __name__ == "__main__":
    app.run()
After that, your file variable will be a werkzeug FileStorage object, and the files variable will be a list of such objects.
And to download all these files, you can check this question.
I have created a Python program, shown below, that moves files to AWS S3. It expects an input value. It works well in PyCharm, but when I call the script from the command prompt, it doesn't pick up the input value. Here is my code:
import os
import sys
import boto3
from botocore.client import Config
import configparser
import re
import os.path

## Initialize the Parameters
def initconfig(input):
    config = configparser.ConfigParser()
    config.read_file(open('CONFIG_AIRBILLING.conf'))
    print('Code Name is :' + input)
    global REMOTE_DIR, ACCESS_KEY_ID, ACCESS_SECRET_KEY, BUCKET_NAME, TARGET_DIR, FILENAME, SRC_DIR, data, File
    ACCESS_KEY_ID = config.get('ACCESS', 'ACCESS_KEY_ID')
    print('ACCESS_ID_IS:' + ACCESS_KEY_ID)
    ACCESS_SECRET_KEY = config.get('ACCESS', 'ACCESS_SECRET_KEY')
    BUCKET_NAME = config.get('ACCESS', 'BUCKET_NAME')
    SRC_DIR = config.get(input, 'SRC_DIR')
    FILENAME = config.get(input, 'FILENAME')
    #LOC="C:\test\demo.txt"
    TARGET_DIR = config.get(input, 'TARGET_DIR')
    File = 'demo.txt'
    #data = open(File, 'rb')  ## This is the filename, need to change it

## This function will make sure the file exists in the source directory
def readstatus():
    try:
        with open(File, 'r') as f:
            f.closed
        result = 'True'
        movefiles(result)
    except (Exception, FileNotFoundError) as e:
        print('***Error: File Not Found or Accessible***')
        result = 'False'
        raise e

## This function will move the files to the AWS S3 bucket
def movefiles(result):
    if result == 'True':
        s3 = boto3.resource(
            's3',
            aws_access_key_id=ACCESS_KEY_ID,
            aws_secret_access_key=ACCESS_SECRET_KEY,
            config=Config(signature_version='s3v4')
        )
        s3.Bucket(BUCKET_NAME).put_object(Key=TARGET_DIR + '/' + File, Body=File)
        print('***File Moved***')
    print("Done")

initconfig("ABC")
readstatus()
The code above runs fine in PyCharm because I can change the argument to the initconfig function there. But when I run it from the command prompt, it doesn't take the parameter value I pass. Here is how I am passing the value; please help me fix this.
From the command prompt:
python move_files_to_S3 "TEST"
You want sys.argv, which is a list of all parameters passed on the command line (but note that sys.argv[0] is the name of the script itself).
You would also do well to check __name__ == '__main__' to distinguish between your code being run by the Python interpreter from the command line (as in the example at the end of your post) and being imported from another module:
if __name__ == '__main__':
    initconfig(sys.argv[1])
    readstatus()
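The argument handling itself can be sketched and checked without boto3 or a config file (the helper function and usage string are my own):

```python
def get_code_name(argv):
    # argv[0] is the script name; the code name must be the first real argument
    if len(argv) < 2:
        raise SystemExit(f"usage: {argv[0]} CODE_NAME")
    return argv[1]

print(get_code_name(["move_files_to_S3.py", "TEST"]))  # → TEST
```

In the real script you would call get_code_name(sys.argv), which also gives a clear usage message instead of an IndexError when the argument is forgotten.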