I have an INI file I need to modify using Python. I was looking into the ConfigParser module but am still having trouble. My code goes like this:
config= ConfigParser.RawConfigParser()
config.read('C:\itb\itb\Webcams\AMCap1\amcap.ini')
config.set('Video','Path','C:\itb\itb')
But when looking at the amcap.ini file after running this code, it remains unmodified. Can anyone tell me what I am doing wrong?
ConfigParser does not automatically write back to the file on disk. Use the .write() method for that; it takes an open file object as its argument.
import ConfigParser

config = ConfigParser.RawConfigParser()
config.read(r'C:\itb\itb\Webcams\AMCap1\amcap.ini')
config.set('Video', 'Path', r'C:\itb\itb')

with open(r'C:\itb\itb\Webcams\AMCap1\amcap.ini', 'wb') as configfile:
    config.write(configfile)
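On Python 3 the module is named configparser (lowercase) and the file should be opened in text mode; a minimal sketch of the same fix, assuming the same paths as above:

import configparser

config = configparser.RawConfigParser()
config.read(r'C:\itb\itb\Webcams\AMCap1\amcap.ini')
config.set('Video', 'Path', r'C:\itb\itb')

# Open in text mode ('w') on Python 3; configparser writes strings, not bytes.
with open(r'C:\itb\itb\Webcams\AMCap1\amcap.ini', 'w') as configfile:
    config.write(configfile)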
You could use python-benedict; it's a dict subclass that provides normalized I/O support for the most common formats, including INI.
from benedict import benedict
# path can be an INI string, a filepath or a remote url
path = 'path/to/config.ini'
d = benedict.from_ini(path)
# do stuff with your dict
# ...
# write it back to disk
d.to_ini(filepath=path)
It's well tested and documented; check the README to see all the features:
https://github.com/fabiocaccamo/python-benedict
Install using pip: pip install python-benedict
Note: I am the author of this project
Is there a way to unzip a .usdz file in python? I was looking at the shutil.unpack_archive, but it looks like I can't use that without an existing function to unpack it. They use zip compression, just have a different file extension. Would just renaming them to have .zip extensions work? Is there a way to "tell" shutil that these are basically .zip files, something else I can use?
Running the Linux unzip command can unpack them, but due to my relative unfamiliarity with shell scripting and the file manipulation I'll need to do, I'd prefer to use python.
You can do this a couple of ways.
Use shutil.unpack_archive with the format="zip" argument, e.g.
import shutil
archive_path = "/path/to/archive.usdz"
shutil.unpack_archive(archive_path, format="zip")
# note you can also pass extract_dir keyword argument to
# set where the files are extracted to
You can also directly use the zipfile module:
import zipfile
archive_path = "/path/to/archive.usdz"
zf = zipfile.ZipFile(archive_path)
zf.extractall()
# note that this extracts to the working directory unless you specify the path argument
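To answer the "tell shutil these are basically .zip files" part of the question: shutil.register_unpack_format lets you register .usdz as a ZIP-backed format, so unpack_archive works without the format= argument. A minimal sketch (the helper name _unpack_usdz is just illustrative):

import shutil
import zipfile

def _unpack_usdz(filename, extract_dir):
    # .usdz archives are plain ZIP files, so delegate to zipfile
    with zipfile.ZipFile(filename) as zf:
        zf.extractall(extract_dir)

shutil.register_unpack_format('usdz', ['.usdz'], _unpack_usdz)

# now unpack_archive recognizes the extension on its own
shutil.unpack_archive("/path/to/archive.usdz")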
I have a PyPI package called collectiondbf which connects to an API with a user-entered API key. It is used in a directory to download files like so:
python -m collectiondbf [myargumentshere..]
I know this should be basic knowledge, but I'm really stuck on the question:
How can I save the keys users give me in a meaningful way so that they do not have to enter them every time?
I would like to use the following solution using a config.json file, but how would I know the location of this file if my package will be moving directories?
Here is how I would like to use it, but obviously it won't work since the working directory will change:
import json
if user_inputed_keys:
    with open('config.json', 'w') as f:
        json.dump({'api_key': api_key}, f)
Most common operating systems have the concept of an application directory that belongs to every user who has an account on the system. This directory allows said user to create and read, for example, config files and settings.
So, all you need to do is make a list of all distros that you want to support, find out where they like to put user application files, and have a big old if..elif..else chain to open the appropriate directory.
Or use appdirs, which does exactly that already:
from pathlib import Path
import json
import appdirs
CONFIG_DIR = Path(appdirs.user_config_dir(appname='collectiondbf')) # magic
CONFIG_DIR.mkdir(parents=True, exist_ok=True)
config = CONFIG_DIR / 'config.json'
if not config.exists():
    with config.open('w') as f:
        json.dump(get_key_from_user(), f)

with config.open('r') as f:
    keys = json.load(f)  # now 'keys' can safely be imported from this module
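For completeness, get_key_from_user above is a placeholder, not part of appdirs; a hypothetical implementation could be as simple as:

def get_key_from_user():
    # prompt once and return the same structure the package expects to load later
    return {'api_key': input('Please enter your API key: ')}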
I need to fetch a couple of files from a huge SVN repo. The whole repo takes almost an hour to fetch. The files I am looking for are part of a tar bundle.
Is it possible to fetch only those two files from tar bundle without extracting the whole bundle through Python Code?
If so, can anybody let me know how should I go about it?
It sounds like you have two parts to your question:
Fetching a single tar bundle from the SVN repo, without the rest of the repo's files.
Using Python to extract two files from the retrieved bundle.
For the first part, I'll simply refer to this post on svn export and sparse checkouts.
For the second part, here is a solution for extracting the two files from the retrieved tarball:
import tarfile

files_i_want = ['path/to/file1', 'path/to/file2']

with tarfile.open("bundle.tar") as tar:
    tar.extractall(members=[m for m in tar.getmembers() if m.name in files_i_want])
Here is one way to get a tar file from svn and extract one file from it:
import tarfile
from subprocess import check_output

# Capture the tar file from subversion
tmp = '/home/me/tempfile.tar'
with open(tmp, 'wb') as f:
    f.write(check_output(["svn", "cat", "svn://url/some.tar"]))

# Extract the file we want, saving it under 'dir2'
tarfile.open(tmp).extract('dir1/fname.ext', path='dir2')
where 'dir1/fname.ext' is the full path to the file that you want within the tar archive. It will be saved in 'dir2/dir1/fname.ext'. If you omit the path argument, it will be saved in 'dir1/fname.ext' under the current directory.
The above can be understood as follows. On a normal shell command line, svn cat url tells subversion to send the file defined by url to stdout (see svn help cat for more info). url can be any type of url that svn understands such as svn://..., svn+ssh://..., or file://.... We run this command under python control using the subprocess module. To do this the svn cat url command is broken up into a list: ["svn", "cat", "url"]. The output from this svn command is saved to a local file defined by the tmp variable. We then use the tarfile module to extract the file you want.
Alternatively, you could use the extractfile method to capture the file data in a Python variable:
t = tarfile.open(tmp)
handle = t.extractfile('dir1/fname.ext')
print(handle.readlines())  # show file contents
According to the documentation, tarfile should accept a subprocess's stdout as a file handle. This would simplify the code and eliminate the need to save the tar file locally. However, due to a bug, Issue 10436, that will not work.
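As a possible workaround (a sketch only, not verified against that particular issue), tarfile's streaming mode 'r|' is designed for non-seekable file objects such as pipes, so in principle the archive could be read straight from the svn subprocess without a temporary file:

import tarfile
from subprocess import Popen, PIPE

# Stream the tar data from svn; mode 'r|' reads a non-seekable stream.
proc = Popen(["svn", "cat", "svn://url/some.tar"], stdout=PIPE)
with tarfile.open(fileobj=proc.stdout, mode='r|') as tar:
    for member in tar:
        if member.name == 'dir1/fname.ext':
            # only the member currently being streamed can be extracted
            tar.extract(member, path='dir2')
            break
proc.stdout.close()
proc.wait()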
Perhaps you want something like this?
#!/usr/local/cpython-3.3/bin/python

import tarfile as tarfile_mod

def main():
    tarfile = tarfile_mod.TarFile('tar-archive.tar', 'r')
    if False:
        file_ = tarfile.extractfile('etc/protocols')
        print(file_.read())
    else:
        tarfile.extract('etc/protocols')
    tarfile.close()

main()
I have hundreds of CSV files zipped. This is great because they take very little space, but when it is time to use them, I have to make some space on my HD and unzip them before I can process them. I was wondering if it is possible with Python (or the Linux command line) to unzip a file while reading it. In other words, I would like to open a zip file, start to decompress the file and, as we go, process it.
So there would be no need for extra space on my drive. Any ideas or suggestions?
Python, since version 1.6, provides the zipfile module to handle this kind of situation. An example usage:
import csv
import zipfile

with zipfile.ZipFile('myarchive.zip') as archive:
    with archive.open('the_zipped_file.csv') as fin:
        reader = csv.reader(fin, ...)
        for record in reader:
            pass  # process record
Note that in Python 3 things get a bit more complicated because the file-like object returned by archive.open yields bytes, while csv.reader wants strings. You can write a simple class that does the conversion from bytes to strings using a given encoding:
class EncodingConverter:
    def __init__(self, fobj, encoding):
        self._iter_fobj = iter(fobj)
        self._encoding = encoding

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iter_fobj).decode(self._encoding)
and use it like:
import csv
import zipfile

with zipfile.ZipFile('myarchive.zip') as archive:
    with archive.open('the_zipped_file.csv') as fin:
        reader = csv.reader(EncodingConverter(fin, 'utf-8'), ...)
        for record in reader:
            pass  # process record
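Alternatively, the standard library's io.TextIOWrapper can do the bytes-to-text decoding for you; a minimal sketch, assuming the same archive and member names as above:

import csv
import io
import zipfile

with zipfile.ZipFile('myarchive.zip') as archive:
    with archive.open('the_zipped_file.csv') as fin:
        # wrap the binary stream so csv.reader receives decoded text
        reader = csv.reader(io.TextIOWrapper(fin, encoding='utf-8'))
        for record in reader:
            print(record)  # replace with real processing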
While it is very possible to open ZIP files in Python, it is also possible to handle this transparently using a filesystem extension. Whether this is preferable depends on various factors, including system access and solution portability.
See Fuse-Zip:
With fuse-zip you really can work with ZIP archives as real directories. Unlike KIO or Gnome VFS, it can be used in any application without modifications.
Or AVFS: A Virtual File System:
AVFS is a system, which enables all programs to look inside gzip, tar, zip, etc. files or view remote (ftp, http, dav, etc.) files, without recompiling the programs.
Note that these solutions are system-specific and rely on FUSE. There might be similar transparent solutions for Windows - but that would require another investigation for the specific system.
The only way I came up with for deleting a file from a zipfile was to create a temporary zipfile without the file to be deleted and then rename it to the original filename.
In Python 2.4 the ZipInfo class had an attribute file_offset, so it was possible to create a second zip file and copy the data to the other file without decompressing/recompressing.
This file_offset is missing in Python 2.6, so is there another option besides creating another zipfile by uncompressing every file and then recompressing it again?
Is there maybe a direct way of deleting a file in the zipfile? I searched and didn't find anything.
The following snippet worked for me (it deletes all *.exe files from a ZIP archive):
import zipfile

zin = zipfile.ZipFile('archive.zip', 'r')
zout = zipfile.ZipFile('archive_new.zip', 'w')
for item in zin.infolist():
    buffer = zin.read(item.filename)
    if item.filename[-4:] != '.exe':
        zout.writestr(item, buffer)
zout.close()
zin.close()
If you read everything into memory, you can eliminate the need for a second file. However, this snippet recompresses everything.
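For example, a minimal sketch of that in-memory variant (it still recompresses everything, but needs no second file on disk):

import io
import zipfile

# Build the filtered archive in memory, then overwrite the original file.
buf = io.BytesIO()
with zipfile.ZipFile('archive.zip', 'r') as zin, zipfile.ZipFile(buf, 'w') as zout:
    for item in zin.infolist():
        if not item.filename.endswith('.exe'):
            zout.writestr(item, zin.read(item.filename))

with open('archive.zip', 'wb') as f:
    f.write(buf.getvalue())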
After closer inspection, ZipInfo.header_offset is the offset from the start of the file. The name is misleading, as the main ZIP header (the central directory) is actually stored at the end of the file. My hex editor confirms this.
So the problem you'll run into is the following: You need to delete the directory entry in the main header as well or it will point to a file that doesn't exist anymore. Leaving the main header intact might work if you keep the local header of the file you're deleting as well, but I'm not sure about that. How did you do it with the old module?
Without modifying the main header I get an error "missing X bytes in zipfile" when I open it. This might help you to find out how to modify the main header.
Not very elegant, but this is how I did it:
import subprocess
import zipfile

z = zipfile.ZipFile(zip_filename)
files_to_del = [f for f in z.namelist() if f.endswith('.exe')]
z.close()

cmd = ['zip', '-d', zip_filename] + files_to_del
subprocess.check_call(cmd)

# reload the modified archive
z = zipfile.ZipFile(zip_filename)
The routine delete_from_zip_file from ruamel.std.zipfile¹ allows you to delete a file based on its full path within the ZIP, or based on (re) patterns. E.g. you can delete all of the .exe files from test.zip using
from ruamel.std.zipfile import delete_from_zip_file
delete_from_zip_file('test.zip', pattern='.*.exe')
(please note the dot before the *).
This works similarly to mdm's solution (including the need for recompression), but recreates the ZIP file in memory (using the class InMemZipFile()), overwriting the old file after it is fully read.
¹ Disclaimer: I am the author of that package.
Based on Elias Zamaria's comment on the question.
Having read through Python Issue #51067, I want to give an update regarding it.
As of today, a solution already exists, though it is not approved by Python due to a missing Contributor Agreement from the author.
Nevertheless, you can take the code from https://github.com/python/cpython/blob/659eb048cc9cac73c46349eb29845bc5cd630f09/Lib/zipfile.py and create a separate file from it. After that, just reference it from your project instead of the built-in Python library: import myproject.zipfile as zipfile.
Usage:
with zipfile.ZipFile("archive.zip", "a") as z:
    z.remove("firstfile.txt")
I believe it will be included in future Python versions. For me it works like a charm for the given use case.