I'm working on a script that will be executed every time a new pull request is created in GitHub. These pull requests contain a stand-alone Python program, including a requirements.txt file listing any extra packages that need to be installed with pip to run said program. For each of these packages, a version needs to be pinned, not left empty, e.g.
pytest # not allowed
pytest==1.2.3 # allowed
I've been thinking that one way of verifying this is to use the regex from PEP 440 (https://peps.python.org/pep-0440/#public-version-identifiers) and ensure that every line that does not start with "#" ends with a match for said regex, but there should be an easier way than going through the file line by line.
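Even if I still end up going line by line, a parser feels sturdier than the raw regex. Here's a rough sketch of what I have in mind, assuming the third-party packaging library is acceptable in the check environment (pip-specific lines such as -r or -e are not handled):

# Rough sketch: flag any requirement that is not pinned with '=='.
# Assumes the third-party "packaging" library (pip install packaging).
from packaging.requirements import InvalidRequirement, Requirement

def unpinned_requirements(path):
    """Return the lines in a requirements file that lack an '==' pin."""
    offenders = []
    with open(path) as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()  # drop comments and blanks
            if not line:
                continue
            try:
                req = Requirement(line)
            except InvalidRequirement:
                offenders.append(line)
                continue
            operators = {spec.operator for spec in req.specifier}
            if operators != {"=="}:  # empty or non-exact specifiers fail
                offenders.append(line)
    return offenders

if __name__ == "__main__":
    bad = unpinned_requirements("requirements.txt")
    if bad:
        print("Unpinned requirements:")
        print("\n".join("  " + line for line in bad))
        raise SystemExit(1)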
Even if I risk some downvotes, I want to ask about a strategy for doing automatic installation of libraries and packages. I tested the strategy of collecting all libraries from pip freeze and writing them into a file.txt, then running pip install -r file.txt, and it worked great. So far, so good. But what can you do when you want to gradually add libraries and don't want to write them into file.txt manually, but simply have code that reads the new library, perhaps uses subprocess, and installs it automatically? The purpose behind this question is to make the code work at its fullest with one single human action: when you run the code, it reads the new libraries and installs them automatically, without you writing them into file.txt. Any ideas are appreciated, thank you! :)
It seems like you already have all of the parts in place. Why not just write a script that runs every day, calls pip freeze, writes the output to the txt file, and updates everything?
Then run the script as often as you want with crontab.
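For instance, a small Python sketch of that daily job (the file name file.txt is taken from your question), which cron could invoke:

# Sketch of the daily job: freeze the current environment into file.txt,
# then install from it. Run via cron, e.g.:
#   0 3 * * * /usr/bin/python3 /path/to/update_requirements.py
import subprocess
import sys

def snapshot(path="file.txt"):
    """Write the current environment's pinned package list to the file."""
    frozen = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        check=True, capture_output=True, text=True,
    ).stdout
    with open(path, "w") as f:
        f.write(frozen)

def install(path="file.txt"):
    """Install everything listed in the file."""
    subprocess.run([sys.executable, "-m", "pip", "install", "-r", path],
                   check=True)

if __name__ == "__main__":
    snapshot()
    install()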
I have installed some command-line clients using pip that should run straight from the command line, without the python keyword and without the path to the file.
For instance, shub from Scrapinghub, or turbolift.
All I get is:
shub: command not found
and
turbolift: command not found
What environment variables should I add to .bash_profile to enable the desired command line behaviour?
The directory which contains the script you want to run must be added to your PATH.
For example, to add $HOME/bin/pip/, you would use
PATH=$HOME/bin/pip:$PATH
to add it at the front. (If it's not frequently used, and doesn't need to override system commands, maybe add to the end instead.)
Many guidelines add export PATH but this is normally unnecessary, as the system startup files will already have declared this particular variable to be exported.
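If you're not sure which directory pip used for its console scripts, you can ask the interpreter itself; a quick sketch using the standard sysconfig module (the "posix_user" scheme applies to pip install --user on Linux/macOS):

# Print the directories where console scripts are installed for this
# interpreter; one of these is what belongs on your PATH.
import sysconfig
print(sysconfig.get_path("scripts"))                # system/site installs
print(sysconfig.get_path("scripts", "posix_user"))  # pip install --user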
This is an extremely basic and very frequently asked question.
I'm a bit confused about how to implement git hooks correctly, and I cannot figure out how to access any type of information that I need from within my script. I have very little experience scripting/using Python.
I simply want to access the filenames of the files (and later also the contents of the files) about to be committed in a pre-commit hook, so that I can check if they match a naming convention. I've seen posts such as this one, "Git server hook: get contents of files being pushed?", where the poster mentions how he got a list of the files by calling git diff --cached --name-status --diff-filter=AM.
I'm sorry if this is a stupid question, but how do I call this line from within my script and assign its output to something? I recognize that line as a Git command, but I'm confused about how that translates into code. What does it look like in Python?
Here's all I currently have for a template for my pre-commit hook. It simply does a test print, and it's in Python.
#!/usr/bin/env python
import sys
print("\nError details\n")
git diff-index --name-status HEAD | grep '^[MA]'
That's the most reliable way I know. It prints each file's status, M or A, followed by some whitespace, followed by the filename, to indicate whether the file was "modified" or "added."
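Since you asked what that looks like in Python: a minimal sketch using subprocess to run the same command, with the grep filtering done in Python (git's output format, status letter then tab then filename, is assumed):

# Run the same git command from the hook and keep only M/A entries.
import subprocess

def changed_files():
    """Return (status, filename) pairs for modified or added files."""
    out = subprocess.run(
        ["git", "diff-index", "--name-status", "HEAD"],
        check=True, capture_output=True, text=True,
    ).stdout
    pairs = []
    for line in out.splitlines():
        status, name = line.split("\t", 1)  # output is status<TAB>filename
        if status in ("M", "A"):            # same filter as grep '^[MA]'
            pairs.append((status, name))
    return pairs

for status, name in changed_files():
    print(status, name)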
There is some extra magic, though. I would recommend:
git stash --keep-index
git diff-index --name-status HEAD | grep '^[MA]'
git reset --hard
git stash pop --quiet --index
This will give you the list of names in your staging area (by stashing any changes since your last git add command) and restore your workspace immediately afterward. Since your staging area, not your workspace, is what you're about to commit, this is probably what you want.
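Driven from your Python hook, the same sequence could look like this sketch (the try/finally is there so the workspace is restored even if the diff step fails):

# Sketch: stash unstaged changes, diff what is actually staged, restore.
import subprocess

def git(*args, capture=False):
    result = subprocess.run(["git", *args], check=True, text=True,
                            capture_output=capture)
    return result.stdout

git("stash", "--keep-index")
try:
    diff = git("diff-index", "--name-status", "HEAD", capture=True)
finally:
    git("reset", "--hard")
    git("stash", "pop", "--quiet", "--index")

staged = [line for line in diff.splitlines() if line[:1] in ("M", "A")]
print("\n".join(staged))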
I have a program that does all this at https://github.com/elfsternberg/pre-commit-stash
It's written in Hy, a dialect of Python that most people barely know about, let alone read. Hy does come with a hy2py transpiler, though, so if you really need it, this script will show you how it's done.
Given Fileconveyor's limited documentation, I'm confused as to where it installs after I've run the pip command given on their website, Fileconveyor.org.
Bottom line: has anyone had luck installing Fileconveyor on Debian 6 for integration with Drupal 6 and the CDN module?
I can't figure out where to put my settings.xml file.
The documentation does give some indication of where things are put, but it isn't entirely clear, in that we expect an "installation" to move things to certain destinations, such as /usr/bin. In reality, Fileconveyor stays in the very directory where the git clone placed it.
The settings file (which must be copied from the file named "config.sample.xml") is in a 'conveyor' folder within the main 'conveyor' folder.
The link where you can read about this is https://github.com/wimleers/fileconveyor
It reads in part: "The sample configuration file (config.sample.xml) should be self explanatory. Copy this file to config.xml, which is the file File Conveyor will look for, and edit it to suit your needs."
Starting it doesn't actually invoke any command named 'fileconveyor', which, as I mentioned above, is what one might expect from a typical installation. Another instruction from the above link reads:
"Starting File Conveyor
File Conveyor must be started by starting its arbitrator (which links
everything together; it controls the file system monitor, the processor
chains, the transporters and so on). You can start the arbitrator like this:
python /path/to/fileconveyor/arbitrator.py"
In my case the command is 'python ~/src/conveyor/conveyor/arbitrator.py'
In retrospect, I might reinstall in another directory, in case I ever empty my ~/src folder, which is the folder I use to download items I compile, install, and then clean up. I wasn't expecting it to end up being the installation folder for Fileconveyor.
Hope this helps.
I want to remove an incorrectly installed program and reinstall it. I can remove the program with subprocess.Popen calling msiexec on it, and install the new program the same way, BUT ONLY with two independent scripts. But I also need to remove some folders in C:\Program Files and in C:\Documents and Settings. How can I traverse the directory structure and remove the folders? Also, how can I make the script continue from the next line after restarting the PC, so it can install the new program?
In a nutshell, here's what you'll need to do.
You can delete the files and folders by using the remove() and rmdir() or removedirs() functions in the os module (assuming your user/program has administrative rights).
To restart your script, you will first need to add some command-line argument handling that lets it be told whether to start from the beginning or continue from a later point.
To get the script to run after the restart, you'll need to set a value in the Windows registry. I believe they're stored under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce and HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\RunOnce keys. There you can add a string value (type REG_SZ) containing a command line that invokes your script and passes it the appropriate command-line argument(s) telling it to continue and re-install the program.
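Here's a rough sketch of those steps, assuming Windows and the standard-library winreg module; the folder list, the "ReinstallScript" value name, and the --continue flag are all placeholders:

# Rough sketch for Windows: remove leftover folders, then register this
# script under RunOnce so it is launched once after the next reboot.
import os
import sys
import winreg

LEFTOVER_DIRS = [r"C:\Program Files\SomeApp"]  # hypothetical leftovers
RUN_ONCE = r"Software\Microsoft\Windows\CurrentVersion\RunOnce"

def remove_leftovers():
    """Walk each tree bottom-up, deleting files first, then directories."""
    for root in LEFTOVER_DIRS:
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            for name in filenames:
                os.remove(os.path.join(dirpath, name))
            os.rmdir(dirpath)

def schedule_continuation():
    """Ask Windows to re-run this script once after the next reboot."""
    cmd = '"%s" "%s" --continue' % (sys.executable,
                                    os.path.abspath(sys.argv[0]))
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_ONCE, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "ReinstallScript", 0, winreg.REG_SZ, cmd)

if __name__ == "__main__":
    if "--continue" in sys.argv:
        pass  # re-install the new program here (e.g. msiexec via subprocess)
    else:
        remove_leftovers()
        schedule_continuation()
        # reboot here (manually, or e.g. "shutdown /r")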