ConfigParser overwriting the content of config file - python

I am having a problem when writing to a config file. I have two Python scripts that read and write to the same file. The problem is that when I write to it from one script, it overwrites the content written by the other script.
Here is my code:
authfile = "Users/.ahs" # .ahs is a hidden file
config = ConfigParser.ConfigParser()
tmpfile = open(authfile, "w+")
config.add_section(s)
config.set(s, k, t)
config.write(tmpfile)
tmpfile.close()

w+ truncates the file when it opens; are you sure you didn't mean a or a+?
See Confused by python file mode "w+"
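Also note that a or a+ will append another copy of the section every time the script runs. If the goal is for both scripts to share one file without clobbering each other, the usual pattern is read-modify-write: load the existing file, change your own section, and write everything back. A minimal sketch, with placeholder section/key names since s, k and t are not shown in the question:
import ConfigParser  # use configparser on Python 3

authfile = "Users/.ahs"  # .ahs is a hidden file, as in the question
config = ConfigParser.ConfigParser()
config.read(authfile)  # read() loads whatever the other script already wrote (missing files are skipped)
if not config.has_section("auth"):  # "auth", "token", "value" are placeholder names
    config.add_section("auth")
config.set("auth", "token", "value")
with open(authfile, "w") as tmpfile:  # "w" is safe here because the existing content was read back first
    config.write(tmpfile)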


How can I use a UTF-8 file name retrieved from a YAML file with os.path.isdir()?

I am loading a folder path from a config.yml file. Example name: C:/Users/Name/Desktop/ü, which contains a UTF-8 character. When I load this path using yaml.load(config) (I am using ruamel.yaml) and then use the loaded value to check whether the directory exists with os.path.isdir(), I always get back False, even though the directory exists (on Windows).
However, when I check with a hardcoded string like root_path = 'C:/Users/Name/Desktop/ü', I get True.
I dumped the data (a python dict) to the config file using yaml.dump():
with open(path_to_config, 'w', encoding='utf-8') as config:
    yaml.dump(data, config)
which looks like this when opening in a text editor:
destination:
  root_path: C:/Users/Name/Desktop/ü
Printing the hardcoded value to the console shows:
C:/Users/Name/Desktop/▒
or when using print(root_path.encode('utf-8')):
b'C:/Users/Name/Desktop/\xc3\xbc'
To retrieve the root_path from the config file I use:
with open('config.yaml') as cfg:
    user_data = yaml.load(cfg)
    root_path = user_data['destination']['root_path']
When I print the root_path retrieved from the config.yml file instead I get:
C:/Users/Name/Desktop/ü
and using print(root_path.encode('utf-8')):
b'C:/Users/Name/Desktop/\xc3\x83\xc2\xbc'
Where does this difference come from and how can I convert the value loaded from the config file so that os.path.isdir() can find the file?
In most examples you'll see, reading a YAML file from disk is done with:
yaml = ruamel.yaml.YAML()
with open('config.yaml') as fp:
    yaml.load(fp)
That open is an open for reading (the same as doing open("config.yaml", "r")). That is fine on Linux, or on Windows when working with ASCII text files. But for the YAML parser to handle non-ASCII input properly on Windows, you should open the file in read-binary mode:
yaml = ruamel.yaml.YAML()
with open('config.yaml', 'rb') as fp:
    yaml.load(fp)
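For the same reason, if you write the config with ruamel.yaml yourself, you can hand it a binary stream and let the library handle the UTF-8 encoding; a minimal sketch, assuming the same data dict as in the question:
import ruamel.yaml

yaml = ruamel.yaml.YAML()
data = {'destination': {'root_path': 'C:/Users/Name/Desktop/ü'}}
with open('config.yaml', 'wb') as fp:  # binary mode: ruamel.yaml writes UTF-8 itself
    yaml.dump(data, fp)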

Paramiko: download, process, and re-upload the same file

I am using Paramiko to create an SFTP client that makes a backup copy of a JSON file, reads in the contents of the original, and then updates the original. I am able to get this snippet of code to work:
# open sftp connection stuff
# read in json create backup copy - but have to 'open' twice
read_file = sftp_client.open(file_path)
settings = json.load(read_file)
read_file = sftp_client.open(file_path)
sftp_client.putfo(read_file, backup_path)
# json stuff and updating
new_settings = json.dumps(settings, indent=4, sort_keys=True)
# update remote json file
with sftp_client.open(file_path, 'w') as f:
    f.write(new_settings)
However when I try to clean up the code and combine the backup file creation and JSON load:
with sftp_client.open(file_path) as f:
    sftp_client.putfo(f, backup_path)
    settings = json.load(f)
The backup file will be created, but json.load will fail due to not having any content. And if I reverse the order, json.load will read in the values, but the backup copy will be empty.
I'm using Python 2.7 on a Windows machine, creating a remote connection to a QNX (Linux) machine. Appreciate any help.
Thanks in advance.
If you want to read the file a second time, you have to seek the file read pointer back to the beginning of the file:
with sftp_client.open(file_path) as f:
    sftp_client.putfo(f, backup_path)
    f.seek(0, 0)
    settings = json.load(f)
Though that is functionally equivalent to your original code with two opens.
If your aim was to optimize the code and avoid downloading the file twice, you will have to read/cache the file into memory and then upload and load the contents from the cache:
from io import BytesIO

f = BytesIO()
sftp_client.getfo(file_path, f)
f.seek(0, 0)
sftp_client.putfo(f, backup_path)
f.seek(0, 0)
settings = json.load(f)
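The updated settings can then be written back the same way as in the original code, without touching the local disk; a short sketch reusing the names above:
new_settings = json.dumps(settings, indent=4, sort_keys=True)
with sftp_client.open(file_path, 'w') as f:
    f.write(new_settings)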

python - file was loaded in the wrong encoding utf-8

I'm quite new to programming and I don't understand this message I get: "file was loaded in the wrong encoding: utf-8". It's not really an error in the code; I get it on the new .txt file where I write all found keywords to. The .txt file gets up to 4000+ rows of information that I sort into Excel in another program and later send to Access. What does the message mean, and is there a way to fix it? Thanks.
I'm using PyCharm with Anaconda 3.6.
import glob

def LogFile(filename, tester):
    data = []
    with open(filename) as filesearch:  # open search file
        filesearch = filesearch.readlines()  # read file
        file = filename[37:]
        for line in filesearch:
            if tester in line:  # extract "Create Time"
                short = line[30:]
                data.append(short)  # store all found words in array
    print(file)
    with open('Msg.txt', 'a') as handler:  # create .txt file
        for i in range(len(data)):
            handler.write(f"{file}|{data[i]}")

# open with 'w' to "reset" the file.
with open('LogFile.txt', 'w') as file_handler:
    pass
# ---------------------------------------------------------------------------------
for filename in glob.glob(r'C:\Users\Documents\Access\\GTX797\*.log'):
    LogFile(filename, 'Sending Request: Tester')
I just had the same error in PyCharm and fixed it by specifying UTF-8 when creating the file. You will need to import codecs to do this.
import codecs

with codecs.open('name.txt', 'a', 'utf-8-sig') as f:
    ...  # write to f as before
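On Python 3 the built-in open() accepts an encoding argument as well, so the same fix can be applied directly to the code from the question; a sketch using the question's own names:
with open('Msg.txt', 'a', encoding='utf-8') as handler:  # or 'utf-8-sig' to write a BOM
    for i in range(len(data)):
        handler.write(f"{file}|{data[i]}")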

Python: Use Dropbox API - Save .ODT File

I'm using the Dropbox API with Python. I don't have problems with the Dropbox API; I complete all the authentication steps without problems.
When I use this code:
pdf_dropbox = client.get_file('/Example.pdf')
new_file = open('/home/test.pdf','w')
new_file.write(pdf_dropbox.read())
This generates a file at /home/test.pdf; it's a PDF file and the content is displayed the same as the original.
But when I try the same code with an .odt file, the generated file is broken:
odt_dropbox = client.get_file('/Example.odt')
new_file = open('/home/test_odt.odt','w')
new_file.write(odt_dropbox.read())
This new file test_odt.odt has errors and I can't see its content.
# With this instruction I have the content of the odt file inside odt_dropbox
odt_dropbox = client.get_file('/Example.odt')
Which is the best way to save the content of an .odt file?
Is there a better way to write LibreOffice files?
I'd appreciate any helpful information.
Thanks
Solved, I forgot two things:
Open the file for binary writing with wb instead of w:
new_file = open('/home/test_odt.odt','wb')
Close the file after writing, new_file.close(), so the buffer is flushed.
Full Code:
odt_dropbox = client.get_file('/Example.odt')
new_file = open('/home/test_odt.odt','wb')
new_file.write(odt_dropbox.read())
new_file.close()
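A with block does the close (and flush) for you, so the same download can also be written without the explicit close(); a small sketch assuming the same client object:
odt_dropbox = client.get_file('/Example.odt')
with open('/home/test_odt.odt', 'wb') as new_file:  # binary mode, closed automatically
    new_file.write(odt_dropbox.read())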

How to use tempfile.NamedTemporaryFile() in Python

I want to use tempfile.NamedTemporaryFile() to write some content into it and then open that file. I have written the following code:
tf = tempfile.NamedTemporaryFile()
tfName = tf.name
tf.seek(0)
tf.write(contents)
tf.flush()
but I am unable to open this file and see its contents in Notepad or a similar application. Is there any way to achieve this? Why can't I do something like:
os.system('start notepad.exe ' + tfName)
at the end.
I don't want to save the file permanently on my system. I just want the contents to be opened as text in Notepad or a similar application, and the file to be deleted when I close that application.
This could be one of two reasons:
Firstly, by default the temporary file is deleted as soon as it is closed. To fix this use:
tf = tempfile.NamedTemporaryFile(delete=False)
and then delete the file manually once you've finished viewing it in the other application.
Alternatively, it could be that because the file is still open in Python, Windows won't let you open it with another application.
Edit: to answer some questions from the comments:
As the docs note, when using delete=False the file can be removed manually by using:
tf.close()
os.unlink(tf.name)
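Putting both points together for the Notepad case from the question, a minimal sketch (assuming Windows, and contents defined as in the question):
import os
import tempfile

tf = tempfile.NamedTemporaryFile(mode='w', suffix='.txt', delete=False)
tf.write(contents)  # 'contents' comes from the question
tf.close()  # release the handle so Notepad is allowed to open the file
os.system('start /wait notepad.exe ' + tf.name)  # /wait blocks until Notepad is closed
os.unlink(tf.name)  # remove the temporary file afterwards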
You can also use it with a context manager so that the file will be closed/deleted when it goes out of scope. It will also be cleaned up if the code inside the context manager raises an exception.
import tempfile

with tempfile.NamedTemporaryFile() as temp:
    temp.write(b'Some data')  # default mode is 'w+b', so write bytes
    temp.flush()
    # do something interesting with temp before it is destroyed
Here is a useful context manager for this.
(In my opinion, this functionality should be part of the Python standard library.)
# python2 or python3
import contextlib
import os
import tempfile

@contextlib.contextmanager
def temporary_filename(suffix=None):
    """Context that introduces a temporary file.

    Creates a temporary file, yields its name, and upon context exit, deletes it.
    (In contrast, tempfile.NamedTemporaryFile() provides a 'file' object and
    deletes the file as soon as that file object is closed, so the temporary file
    cannot be safely re-opened by another library or process.)

    Args:
      suffix: desired filename extension (e.g. '.mp4').

    Yields:
      The name of the temporary file.
    """
    try:
        f = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
        tmp_name = f.name
        f.close()
        yield tmp_name
    finally:
        os.unlink(tmp_name)

# Example:
with temporary_filename() as filename:
    os.system('echo Hello >' + filename)
    assert 6 <= os.path.getsize(filename) <= 8  # depending on text EOL
assert not os.path.exists(filename)
