commit (upload) confluence attachments to svn using atlassian-python-api - python

I'm trying to upload files to SVN using atlassian-python-api (Python).
So far, I have managed to download Confluence attachments and store them in my Python project's default directory,
following this example.
Now I'd like to upload those files to my SVN server.
My ideas are:
changing the download path to an SVN working copy, or
directly integrating SVN with Confluence, or
using Atlassian FishEye.
It would be very nice of you if you could walk me through it or give me some clues.
I've been searching for useful SVN Python modules, but none of them work;
they don't seem to be maintained or supported at all.
Thank you!
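Since the attachments are already being downloaded with atlassian-python-api, the simplest route is the first idea: download them straight into a checked-out SVN working copy and commit from there. A minimal sketch, assuming the svn command-line client is installed and configured with credentials; the working-copy path and commit message are placeholders:

import subprocess

# Hypothetical `svn checkout` location that now holds the downloaded attachments.
WORKING_COPY = "/path/to/svn/working-copy"

def commit_attachments(message="Add Confluence attachments"):
    # Schedule new files for addition; --force skips files already under version control.
    subprocess.run(["svn", "add", "--force", "."], cwd=WORKING_COPY, check=True)
    # Commit everything in the working copy to the repository.
    subprocess.run(["svn", "commit", "-m", message], cwd=WORKING_COPY, check=True)

commit_attachments()

This avoids depending on any of the unmaintained SVN Python bindings, since it only shells out to the stock svn client.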

Related

Add library/module to server

I am pretty new to Python and would like to use the PyMuPDF library on a web server in order to modify PDFs. The problem is, I am unable to add/install any modules or libraries on the server.
Is there a way to install all the libraries and modules into a directory, upload that directory to the server, and point the Python file (also on the server) at those uploaded folders?
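One common approach, sketched here under assumptions rather than as a tested recipe: install the dependencies into a local folder with pip's --target option, upload that folder next to your script, and put it on sys.path before importing. Note that PyMuPDF contains compiled extensions, so the bundle must be built with the same Python version and platform as the server.

# Build step, run on a machine where pip is available (not on the server):
#   pip install --target=./vendor PyMuPDF
# then upload the resulting "vendor" folder next to this script.
import os
import sys

# Make the uploaded folder importable before any third-party imports.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor"))

import fitz  # PyMuPDF's import name

doc = fitz.open("input.pdf")      # placeholder PDF path
print(doc.page_count, "pages")    # quick check that the library loaded
doc.close()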

Automatically updating local CSV on GitHub repo by Python Script

I have a project that runs a Python script in a GitHub repo that reads and updates a local CSV file. Is it possible to save the updated CSV file back to the GitHub repo from that script? Calling and saving the file to my Google Drive won't work because it is a public GitHub repo, so I cannot store my Google credentials.
Does anyone have a solution? Or know a cloud provider that allows read/write access for non-user accounts?
You can check out PyGithub; from their docs:
PyGitHub is a Python library to access the GitHub REST API. This
library enables you to manage GitHub resources such as repositories,
user profiles, and organizations in your Python applications.
Or GitPython
GitPython is a python library used to interact with git repositories,
high-level like git-porcelain, or low-level like git-plumbing.
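For illustration, a minimal PyGithub sketch that overwrites a CSV file in a repository; the access token, repository name, and file path are placeholders, and the token must have write access to the repo:

from github import Github  # pip install PyGithub

gh = Github("YOUR_TOKEN")                  # personal access token with repo write access
repo = gh.get_repo("your-user/your-repo")

path = "data/results.csv"                  # hypothetical CSV path inside the repo
new_csv = "id,value\n1,42\n"               # updated contents produced by your script

current = repo.get_contents(path)          # fetch the current file to obtain its blob SHA
repo.update_file(path, "Update results.csv", new_csv, current.sha)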
Cheers!

Hosting built documentation from a pull request on GitHub

I work on an open source project.
In order to facilitate the review of Sphinx documentation changes in our Python source code, we'd love to somehow get the documentation generated and hosted somewhere for each pull request, like we already do with Coveralls for our code coverage.
Pushing new commits would update the generated doc for that pull request. We'll soon be adding doc generation to our Travis build to find Sphinx errors, but a final visual review would still need to be done locally by pulling the branch and building the docs.
Is there any GitHub app that offers to host a webpage generated on a pull request?
2018: You can make your own GitHub repository serve pages with GitHub Pages.
If you generate your documentation in the gh-pages branch or the docs/ folder of your main branch, you can keep both your codebase and your docs in the same repo.
See for instance lneuhaus/pyrpl issue 85, which illustrates how to automate Sphinx for Python documentation:
cd doc/sphinx
sphinx-apidoc -f -o source/ ../../pyrpl/
make html
You can automate that using Syntaf/travis-sphinx:
A standalone script for automated building and deploying of sphinx docs via travis-ci
Update Q4 2022: the OP fronsacqc adds in the comments:
We ended up pushing the documentation generated by the build on an S3 folder tagged with the static hosting flag, so now every build's documentation is hosted on S3 for a couple of days.
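As a rough illustration of that S3 approach (the bucket name and key prefix are placeholders; it assumes boto3 is installed, AWS credentials are available in the CI environment, and the bucket has static website hosting enabled):

import mimetypes
import os

import boto3  # pip install boto3

BUCKET = "my-docs-bucket"             # placeholder bucket with static website hosting enabled
BUILD_DIR = "doc/sphinx/_build/html"  # Sphinx HTML output directory
PREFIX = "pr-build"                   # hypothetical per-build prefix, e.g. the PR number

s3 = boto3.client("s3")

for root, _dirs, files in os.walk(BUILD_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        key = PREFIX + "/" + os.path.relpath(local_path, BUILD_DIR).replace(os.sep, "/")
        content_type = mimetypes.guess_type(name)[0] or "binary/octet-stream"
        # Setting ContentType lets browsers render the HTML instead of downloading it.
        s3.upload_file(local_path, BUCKET, key, ExtraArgs={"ContentType": content_type})
        print("uploaded s3://{}/{}".format(BUCKET, key))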

How to download files from a repository using python

I am practicing automation with Selenium using Python.
I now want to download an archive file (database scripts) from a Bitbucket repository to my local machine.
Is there any way to do it?
Can you help me by giving some sample code to do the same?
Note: I'm using Python 3.4, and the repository is private.
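If the repository is hosted on Bitbucket Cloud, the archive can usually be fetched over plain HTTPS with the requests library rather than Selenium. A minimal sketch, where the workspace, repository, branch, username, and app password are placeholders for your own values (the .format() calls keep it compatible with Python 3.4):

import requests  # pip install requests

WORKSPACE = "my-workspace"
REPO = "my-repo"
BRANCH = "master"
URL = "https://bitbucket.org/{}/{}/get/{}.zip".format(WORKSPACE, REPO, BRANCH)

# A private repository needs authentication, e.g. username + app password on Bitbucket Cloud.
response = requests.get(URL, auth=("my-username", "my-app-password"), stream=True)
response.raise_for_status()

with open("{}-{}.zip".format(REPO, BRANCH), "wb") as archive:
    for chunk in response.iter_content(chunk_size=8192):
        archive.write(chunk)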

Writing a script to download everything on a server

I want to download all the files that are publicly accessible on this site:
https://www.duo.uio.no/
This is the site for the University of Oslo, and here we can find every paper/thesis that is publicly available from the archives of the university. I tried a crawler, but the website has some mechanism for stopping crawlers from accessing its documents. Are there any other ways of doing this?
I did not mention this in the original question, but what I want is all the PDF files on the server. I tried SiteSucker, but that seems to just download the site itself.
wget --recursive --no-clobber --page-requisites --html-extension --convert-links --restrict-file-names=unix,ascii --domains your-site.com --no-parent http://your-site.com
Try it.
You could try using SiteSucker, which allows you to download the contents of a website, ignoring any rules they may have in place.
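If wget or SiteSucker keep getting blocked, a small Python script can do the same thing. Below is a minimal sketch using requests and BeautifulSoup that scans one listing page for PDF links and downloads them; the start page and User-Agent string are assumptions, and the site's anti-crawling measures may still apply:

import time
from urllib.parse import urljoin

import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

START_PAGE = "https://www.duo.uio.no/"                     # placeholder: a listing page to scan
HEADERS = {"User-Agent": "Mozilla/5.0 (research script)"}  # some sites refuse the default UA

response = requests.get(START_PAGE, headers=HEADERS)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

for link in soup.find_all("a", href=True):
    href = urljoin(START_PAGE, link["href"])
    if href.lower().endswith(".pdf"):
        name = href.rsplit("/", 1)[-1]
        pdf = requests.get(href, headers=HEADERS)
        pdf.raise_for_status()
        with open(name, "wb") as f:
            f.write(pdf.content)
        time.sleep(1)            # be polite between downloads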
