Include multiple RST files in Sphinx with the .. include:: directive - python

In my Sphinx project, I want to have a include folder with multiple RST files that I can reuse in other projects. My source folder looks something like:
\source
\include
links.rst # Here I have useful external links
roles.rst # Here I define custom roles
subs.rst # Here I define common substitutions (replace directive)
... rest of my stuff
conf.py
Basically, I want to be able to write a single .. include:: in my source RST files that will account for all my files, i.e. the equivalent of /include/*.rst
I have come up with a neat solution that I post below since it might be useful to someone else. However, it would be nice to hear other alternatives, since my solution causes an infinite loop when using sphinx-autobuild.

My solution consists of modifying conf.py to include this small piece of code:
conf.py
import os

# Code to generate include.rst
files = os.listdir('include')
with open('include.rst', 'w') as file:
    for rst in files:
        file.write('.. include:: /include/' + rst + '\n')
This will create a new include.rst file in the root source directory, which will look as:
\source
\include
links.rst # Here I have useful external links
roles.rst # Here I define custom roles
subs.rst # Here I define common substitutions (replace directive)
... rest of my stuff
conf.py
include.rst
With the new file include.rst looking like:
.. include:: /include/links.rst
.. include:: /include/roles.rst
.. include:: /include/subs.rst
Finally, in my source files I only need to add on top of the file the line
.. include:: include.rst
to benefit from all my custom links, roles, and substitutions (or anything else you might want there).
PROBLEM:
My solution here presents a problem. Since I use sphinx-autobuild to automatically build the html output whenever a change is detected, an infinite loop is produced, since each time the execution of conf.py is creating the file include.rst. Any ideas on how to solve this?
UPDATE:
I have found the solution to the problem mentioned above, and which was pretty obvious, actually.
Now I execute sphinx-autobuild with the --re-ignore option (which takes a regular expression) as:
> sphinx-autobuild source build/html --re-ignore "include\.rst"
and the loop stops happening.
Now, this is OK if I change the child rst files (i.e. roles, links, or subs), but if include.rst itself changes (e.g. a new child rst file was added) then I need to stop and re-run sphinx-autobuild.
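One alternative that avoids the rebuild loop entirely is to have conf.py rewrite include.rst only when the generated content actually differs, so a no-op build touches nothing and autobuild sees no change. A sketch under the question's directory layout (the helper name and the isdir guard are mine):

```python
import os

def regenerate_include(include_dir='include', target='include.rst'):
    """Rewrite target only when the generated directive list differs."""
    files = sorted(f for f in os.listdir(include_dir) if f.endswith('.rst'))
    content = ''.join('.. include:: /include/' + rst + '\n' for rst in files)
    old = None
    if os.path.exists(target):
        with open(target) as f:
            old = f.read()
    # Skip the write when nothing changed, so watchers see no modification
    if old != content:
        with open(target, 'w') as f:
            f.write(content)

if os.path.isdir('include'):
    regenerate_include()
```

This also picks up newly added child rst files on the next build without needing --re-ignore at all, although autobuild will still trigger one extra rebuild the first time include.rst legitimately changes.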

Related

Trying to copy multiple files in a single directory to directories with matching names

I am developing a file management project for a law office. All case files are in .pdf and reside in a single directory. Any help is greatly appreciated.
My project is as follows:
The first section of my code (1) creates a directory for each case file and (2) copies each case file into the directory with the matching name. No issues there.
The second section of my project uses docxtpl and openpyxl to generate settlement agreements for each case file. No issues there.
The final step is to copy the newly generated consent agreements into the appropriate directories that were created in Step 1. This is where I am having problems. The consent agreements are generated, but they are not being copied into the matching directories.
Here is the code so far:
files2 = os.listdir(template_dir)
del files2[0:2]
print(files2)

# attempt to move consent agreements to directories created in section 1
for file in files2:
    if os.path.exists(dir_path):
        shutil.copy(os.path.join(r'template_dir', file), os.path.join(file, dir_path))
A few things are wrong with the path arguments in the shutil.copy() function.
In the first/source path you have os.path.join(r'template_dir', file). Since template_dir is already a string variable you should not put it in quotes. By putting it in quotes you are sending the text "template_dir" to the os.path.join() function. Corrected version is os.path.join(template_dir, file).
In the second/destination path you flipped the order of the variables. The directory should come first, followed by the filename. Corrected version is os.path.join(dir_path, file).
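Putting both fixes together, a sketch of the corrected loop (I'm assuming each case directory under some case_root is named after the file's base name, per Step 1 of the question; the function name and layout are mine):

```python
import os
import shutil

def copy_agreements(template_dir, case_root):
    """Copy each generated agreement into the case directory whose
    name matches the file (assumed layout: case_root/<name>/ for <name>.pdf)."""
    for file in os.listdir(template_dir):
        name, _ext = os.path.splitext(file)
        dir_path = os.path.join(case_root, name)
        if os.path.isdir(dir_path):
            # source: directory variable (no quotes) joined with the filename;
            # destination: directory first, then filename
            shutil.copy(os.path.join(template_dir, file),
                        os.path.join(dir_path, file))
```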

Python's shutil.make_archive() creates dot directory on Linux (when using tar or gztar)

I'm using a basic python script to create an archive with the contents of a directory "directoryX":
shutil.make_archive('NameOfArchive', format='gztar', root_dir=getcwd()+'/directoryX/')
The generated archive rather than just storing the contents of directoryX, creates a . folder in the archive (and the contents of folder directoryX are stored in this . folder).
Interestingly this only happens with .tar and tar.gz but not with .zip
Used python version -> 3.8.10
It seems that when using .tar or .tar.gz formats, the default base_dir of os.curdir ('.') gets taken literally, and the archive gets a folder titled "."
I tried using base_dir=os.curdir but got the same results...
I also tried Python 2 but got the same results.
Is this a bug with shutil.make_archive or am I doing something incorrectly?
It's a documented behavior, sort of, just a little odd. The base_dir argument to make_archive is documented to:
Be the directory we start archiving from (after chdiring to root_dir)
Default to the current directory (specifically, os.curdir)
os.curdir is actually a constant string, '.', and, matching the tar command line utility, shutil.make_archive (and tar.add which it's implemented in terms of) stores the complete path "given" (in this case, './' plus the rest of the relative path to the file). If you run tar -c -z -C directoryX -f NameOfArchive.tar.gz ., you'll end up with a tarball full of ./ prefixed files too (-C directoryX does the same thing as root_dir, and the . argument is the same as the default base_dir='.').
I don't see an easy workaround that retains the simplicity of shutil.make_archive; if you try to pass base_dir='' it dies when it tries to stat '', so that's out.
To be clear, this behavior should be fine; a tar entry named ./foo and one named foo are equivalent for most purposes. If it really bothers you, you can switch to using the tarfile module directly, e.g.:
# Imports at top of file
import os
import tarfile

# Actual code
with tarfile.open('NameOfArchive.tar.gz', 'w:gz') as tar:
    for entry in os.scandir('directoryX'):
        # Operates recursively on any directories, using the arcname as the base,
        # so you add the whole tree just by adding all the entries in the top
        # level directory. Using arcname of entry.name means it's equivalent to
        # adding os.path.basename(entry.path), omitting all directory components
        tar.add(entry.path, arcname=entry.name)
    # The whole loop *could* be replaced with just:
    #     tar.add('directoryX', arcname='')
    # which would add all contents recursively, but it would also put an entry
    # for '/' in, which is undesirable
For a directory structure like:
directoryX/
|
\- foo
\- bar
\- subdir/
|
\- spam
\- eggs
the resulting tar's contents would be:
foo
bar
subdir/
subdir/eggs
subdir/spam
vs. the:
./foo
./bar
./subdir/
./subdir/eggs
./subdir/spam
your current code produces.
Slightly more work to code, but not much worse: two imports and three lines of code, with greater control over what gets added. For example, you could trivially exclude symlinks by wrapping the tar.add call in an if not entry.is_symlink(): block, or omit recursive adding of specific directories by passing recursive=False to the tar.add call for directories whose contents you don't want to include. You can even provide a filter function to the tar.add call to conditionally exclude specific entries even when deep recursion gets involved.
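As a concrete illustration of the filter hook mentioned above, here's a sketch that archives a directory's contents without the ./ prefix while dropping symlinks at any depth (the function name and wrapper are mine, not from the original answer):

```python
import os
import tarfile

def make_tarball(src_dir, out_path):
    """Archive the contents of src_dir without the './' prefix,
    skipping symlinks via tarfile's filter hook."""
    def exclude_symlinks(tarinfo):
        # Returning None drops the entry entirely; anything else keeps it
        return None if tarinfo.issym() else tarinfo

    with tarfile.open(out_path, 'w:gz') as tar:
        for entry in os.scandir(src_dir):
            tar.add(entry.path, arcname=entry.name, filter=exclude_symlinks)
```

Because the filter is applied to every member, including those found while recursing into subdirectories, this excludes symlinks buried deep in the tree as well as top-level ones.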

GitPython: retrieve changed files between commits in specific sub directory

The repo structure looks like this:
- folder_a
- folder_b
- folder_c
- ...
I am particularly interested in the files that changed in a specific commit, but only the ones in folder_a. My solution is
for filename, details in commit.stats.files.items():
    if not filename.startswith('folder_a'):
        continue
    # ...
but it seems the performance is not quite good if there are a great number of files in the other folders. Is there any better way to skip the files I don't care about?
If I understand correctly: you want stats on modifications from a commit, only for one specific subfolder.
Using plain git:
git show [commit] --stat folder_a
will display exactly what you want.
Have a look at what git.show('<commit identifier>', '--stat', 'folder_a') returns in your Python script.
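A sketch of the same idea using plain git through subprocess (I'm using --numstat rather than --stat because its tab-separated output is easier to parse; the function name is mine):

```python
import subprocess

def changed_files_in(repo_path, commit, subfolder):
    """Ask git itself to limit the output to one subfolder, instead of
    filtering commit.stats.files in Python. --numstat emits one line per
    file: '<added>\t<deleted>\t<path>'."""
    out = subprocess.run(
        ['git', 'show', commit, '--numstat', '--format=', '--', subfolder],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout
    files = {}
    for line in out.splitlines():
        if line.strip():
            added, deleted, path = line.split('\t')
            # git prints '-' for binary files, where line counts don't apply
            files[path] = (int(added) if added != '-' else None,
                           int(deleted) if deleted != '-' else None)
    return files
```

Since git only walks the pathspec you give it, this should avoid the cost of computing stats for the other folders.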

Find a file in a tree in python-3.4 on windows

I want to search a file in a tree.
I know the filename and the root of the tree, but I want to find the file path. I use python-3.4 on windows 7.
I have:
# I have a list of C header file names => my_headers_list that looks like:
for part in my_headers_list:
    print(part)
# shows this
common.h
extern.h
something.h
else.h
I also have a tree ( something/ is a folder):
Software/
something.c
Component1/
extern.c
extern.h
Component2/
Core/
common.h
common.c
Config/
common_m.c
common_m.h
Component3/
Management/
Core/
Config/
etc. It's an example, but it's really close to my real case.
I want:
# a list like the first one, but with the full paths of the files
for part in my_headers_list:
    print(part)
# shows this
Software/Component2/Core/common.h
Software/Component1/extern.h
Software/Component3/Core/something.h
Software/Component3/Management/else.h
To be clear, it needs to be generic, so I can't hard-code links or paths, etc.
So far I've tried some tricky scripts with os.listdir, but they seem messy and don't always work.
# I tried something like this
dirs = os.listdir("Software/")
for part in dirs:
    # don't know how to correctly manage this information to get what I want
Do you have a way to do this?
Keep in mind that I don't want to add any lib or plugin to my Python.
Thank you,
Regards,
Question resolved in comments.
I'll use os.walk / glob.
Thanks to @kevin and @Lafexlos.
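For completeness, the os.walk approach mentioned above can be sketched like this (filenames taken from the question; the helper name is mine):

```python
import os

def find_files(root, names):
    """Walk the tree once, mapping each wanted filename to its first full path."""
    wanted = set(names)
    found = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for f in filenames:
            if f in wanted and f not in found:
                found[f] = os.path.join(dirpath, f)
    return found
```

A single walk handles any number of target filenames, which scales better than calling a per-file search for each header in the list.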

scons : src and include dirs

Can someone give an SCons config file which allows the following structure?
toplevel/
/src - .cc files
/include .h files
At the top level I want the .o files and the final exe.
Here is one example of an SConscript file:
env = Environment(CPPPATH='/usr/include/glib-2.0/:/usr/lib/glib-2.0/include:inc',
                  CPPDEFINES=[],
                  LIBS=['glib-2.0'])
env.Program('runme', Glob('src/*.c'))
(The Environment line is not really necessary for the example, but I have it to include the non-standard glib header path, and left it there so you can see how to add extra includes and defines.)
The source files are in src directory and header files in inc directory. You run scons from the base directory and the output file is also generated in the same directory.
This question: https://stackoverflow.com/questions/279860/...
gives a pretty flexible scons skeleton which should serve your needs with a few tweaks to the path variables.
env = Environment(CPPPATH='/usr/include/glib-2.0/:/usr/lib/glib-2.0/include:include',
                  CPPDEFINES=[],
                  LIBS=['glib-2.0'])
if ARGUMENTS.get('debug', 0):
    env.Append(CCFLAGS=' -g')
env.Program('template', Glob('src/*.cc'))
Worked a treat. Thanks.
