I want to execute a script via the "execute shell" build section of my build's configuration
sudo python buildscript.py param1 param2
However, I don't know how or where to actually upload the script. Hope this makes sense; I don't have shell access to the Jenkins server (or I would put it in the workspace directory, I'm guessing), so I'm wondering if there's an upload interface that I missed somewhere.
You can create a project in your version control system to keep all your scripts. Then just create a job on Jenkins to fetch them (it doesn't need to build anything). They will be made available on the filesystem, inside ~jenkins/workspace, and every time you change them, Jenkins will make sure they stay up to date on the build machine.
For sudo, you need to have ssh access (or know who has) as @Bruno suggested, and grant the jenkins user that permission.
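If someone with root access can help, the usual way to grant that permission is a passwordless sudo rule for just that one command in /etc/sudoers. A minimal sketch, assuming the Jenkins user is named jenkins and the script ends up at a hypothetical workspace path:

```shell
# /etc/sudoers fragment (always edit with visudo).
# The python and script paths here are hypothetical examples;
# the trailing * allows param1/param2-style arguments.
jenkins ALL=(ALL) NOPASSWD: /usr/bin/python /var/lib/jenkins/workspace/myjob/buildscript.py *
```

Restricting the rule to one exact command is safer than giving the jenkins user blanket sudo.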
You should just put buildscript.py in version control next to your source files -- I assume you already have an action configured in your job config to checkout your sources from version control?
Following this answer you can register a python script as a windows service with NSSM.
But are there any obvious downsides on registering a django management command as windows service this way?
nssm.exe install ProjectService
nssm.exe set ProjectService Application "c:\path\to\env\Scripts\python.exe"
nssm.exe set ProjectService AppParameters "c:\path\to\django_project\manage.py command"
The command would parse the contents of some files as they are being created.
No downside that I can imagine that wouldn't also exist if you wrote a standalone script to do it. In theory you could be loading too much: for example, if you don't need database access or your models to complete whatever is in that command, perhaps break it out into a script and call that directly with python. Otherwise, enjoy the convenience, and if you do identify a downside, try to solve that directly.
If you find yourself wanting to schedule tasks that run at a variety of times, you might want to try a distributed task queue pattern, which you can adopt quickly using https://github.com/Koed00/django-q or something like it.
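If you do go the NSSM route, a couple of extra settings are worth adding to the commands in the question: setting the working directory so relative paths in manage.py resolve correctly, and capturing stdout/stderr to log files (these are real NSSM parameters; the service name and paths below just follow the question's hypothetical examples):

```shell
:: Run the service from the project directory so relative paths work
nssm.exe set ProjectService AppDirectory "c:\path\to\django_project"

:: Redirect the command's output to log files for debugging
nssm.exe set ProjectService AppStdout "c:\path\to\logs\service.log"
nssm.exe set ProjectService AppStderr "c:\path\to\logs\service-err.log"
```

Without AppStdout/AppStderr, any traceback from the management command disappears silently when it runs as a service.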
I have a windows application built with progress openedge technology.
I have created a Python script to generate an Excel file, but I need to deploy it to the client, and I'm afraid of requiring special permissions on the client side if I compile it to .exe and attempt to run it.
Can someone suggest me a method to be able to integrate python with my project smoothly without breaking anything?
You could compile it on your own machine then try to run it while logged in as a guest user. If a guest account can run it without complaints it will probably run fine on the client machine.
This is crude because you still haven't tested all possible client platforms (unless you're talking about one specific client); also, we don't know what's inside your script.
Use icacls to set appropriate permissions of your compiled script before shipping.
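For example, a sketch of such an icacls call (the file path is hypothetical):

```shell
:: Grant read & execute to all local users on the compiled exe
:: before packaging it for the client
icacls "C:\deploy\report_generator.exe" /grant Users:RX
```

Note that this sets permissions on your copy; the client's filesystem ACLs still depend on where they place the file.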
I'm not sure about the special permissions thing, but is it possible for you to turn your script into a CGI program and stick it on your webserver, or wrapper it with WebSpeed? Then your app could call a web service to get the .xls file.
I work on multiple appengine projects in any given week. i.e. assume multiple clients. Earlier I could set application in app.yaml. So whenever I did appcfg.py update.... it would ensure deployment to the right project.
When deploying with gcloud, the application setting throws an error. I had to use
gcloud app deploy --project [YOUR_PROJECT_ID] instead. So what used to be a directory-level setting for a project now moves into our build tooling, and missing that simple detail can push a project's code to the wrong customer.
i.e. if I did gcloud config set project proj1 and then somehow did a gcloud app deploy in proj2, it would deploy to proj1. Production deployments are done after detailed verification on the build tools and hence it is less of an issue there because we still use the --project flag.
But it's hard to do similar checks in the development environment; dev_appserver.py doesn't have a --project flag.
When starting dev_appserver.py, I have to run gcloud config set project <project-id> before I start the server. This matters when I'm using things like Pub/Sub or GCS (with dev topics or dev buckets).
Unfortunately, missing a simple configuration step like setting a project ID in a dev environment can result in uploading blobs/messages/etc. into the wrong dev GCS bucket or the wrong dev Pub/Sub topic (when not using emulators). And this has happened quite a few times, especially when starting new projects.
I find the above solutions to be hackish workarounds. Is there a good way to ensure that we do not deploy to or develop against the wrong project when working from a certain directory?
TL;DR - Not supported based on the current working directory, but there are workarounds.
Available workarounds
gcloud does not directly let you set up a configuration per working directory. Instead, you could use one of these 3 options to achieve something similar:
Specify --project, --region, --zone or the config of interest per command. This is painful but gets the job done.
Specify a different gcloud configuration directory per command (gcloud uses ~/.config/gcloud on *nix by default):
CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud COMMAND
CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud COMMAND
Create multiple configurations and switch between them as needed.
gcloud config configurations activate config-1 && gcloud COMMAND
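For completeness, option 3 end to end might look like this (the configuration and project names are placeholders):

```shell
# One-time setup: a named configuration per project.
# "create" also activates the new configuration, so the
# following "set" applies to it.
gcloud config configurations create proj1-config
gcloud config set project proj1-project-id

gcloud config configurations create proj2-config
gcloud config set project proj2-project-id

# Later, switch before working on a given project
gcloud config configurations activate proj1-config
gcloud app deploy
```

The risk this option shares with plain `gcloud config set project` is that the active configuration is global state, not tied to your working directory.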
Shell helpers
As all of the above options are ways to customize on the command line, aliases and/or functions in your favorite shell will also help make things easier.
For example in bash, option 2 can be implemented as follows:
function gcloud_proj1() {
  CLOUDSDK_CONFIG=/path/to/config/dir1 gcloud "$@"
}
function gcloud_proj2() {
  CLOUDSDK_CONFIG=/path/to/config/dir2 gcloud "$@"
}
gcloud_proj1 COMMAND
gcloud_proj2 COMMAND
There's a very nice way I've been using with PyCharm, I suspect you can do so with other IDEs.
You can declare the default env variables for the IDE Terminal, so when you open a new terminal gcloud recognises these env variables and sets the project and account.
No need to switch configurations between projects manually (gcloud config configurations activate). Terminals opened in other projects will inherit their own GCP project and account from the env variables.
I've had this problem for years and I believe I found a decent compromise.
Create a simple script called contextual-gcloud. Note the \gcloud: the backslash bypasses the alias defined below, which is fundamental to avoid infinite recursion once the alias is in place.
🐧$ cat > contextual-gcloud
#!/bin/bash
if [ -d .gcloudconfig/ ]; then
echo "[$0] .gcloudconfig/ directory detected: using that dir for configs instead of default."
CLOUDSDK_CONFIG=./.gcloudconfig/ \gcloud "$@"
else
\gcloud "$@"
fi
Add to your .bashrc and reload / start new bash. This will fix autocompletion.
alias gcloud=contextual-gcloud
That's it! If you have a directory named that way, the script will use it instead of the default, which means you can check your configuration into source control. Just remember to git-ignore things like logs and private data (keys, certificates, ..).
Note: auto-completion is fixed by the alias ;)
Code: https://github.com/palladius/sakura/blob/master/bin/contextual-gcloud
These are exactly the reasons why I strongly dislike gcloud: making command-line arguments mandatory and dropping configuration file support is much too error-prone for my taste.
So far I'm still able to use the GAE SDK instead of Google Cloud SDK (see What is the relationship between Google's App Engine SDK and Cloud SDK?), which could be one option - basically keep doing stuff "the old way". Please note that it's no longer the recommended method.
You can find the still compatible GAE SDKs here.
For whenever the above is no longer an option and I'm forced to switch to the Cloud SDK, my plan is to keep version-controlled cheat-sheet text files in each app directory containing the exact commands for running the dev server, deploying, etc. for that particular project, which I can copy-paste into the terminal without fear of making mistakes. You carefully set these up once, and then you just copy-paste them. As a bonus, you can have different branch versions for different environments (staging/production, for example).
Actually I'm using this approach even for the GAE SDK - to prevent accidental deployment of the app-level config files to the wrong GAE app (such deployments must use cmdline arguments to specify the app in multi-service apps).
Or do the same but with environment config files and wrapper scripts instead of cheat-sheet files, if that's your preference.
I need to set up a Jenkins server to build a python project. But I can't find any good tutorials online to do this, most of what I see uses pip, but our project simply works on
python.exe setup.py build
I've tried running it as a Windows Batch file setting it up through the configure options in Jenkins, where I enter the above line in the box provided, but the build fails.
When I try to run this I get a build error telling me there is no cmd file or program in my project's workspace. But it seems to me that cmd would be a program inherent to Jenkins itself in this case.
What's the best way to go about setting up Jenkins to build a python project using setup.py?
Really, I used Jenkins to test a Java EE project, and I don't know whether the same principle applies here.
I downloaded jenkins.war from the website and deployed it on my JBoss server, then reached it via a URL in the browser: http://localhost:serverport/jenkins. Then I created a job, selected the server, JDK, Maven, and the location of my project in the workspace, and ran a build of the project.
I am sorry if you find that my answer is far from your question, but I tried to give you visibility into my use case.
I realized I did something stupid and forgot that my coworker had set it up on a UNIX server, so I should have been using "Execute shell" instead of "Execute Windows batch command". Once I changed that and installed the Python plugin, I got it to build fine.
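For reference, the working configuration is just an "Execute shell" build step along these lines (assuming python is on the build agent's PATH; on some agents the interpreter may be python3):

```shell
# Jenkins "Execute shell" build step, run from the job workspace
python setup.py build
```

The earlier "no cmd file or program" error came from Jenkins trying to invoke cmd.exe for a Windows batch step on a UNIX agent, where it doesn't exist.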
I do not have access to the admin account in Windows 7. Is there a way to install RabbitMQ and its required Erlang without admin privileges? In some portable way?
I need to use it in my Python Celery project.
Thanks!
It is possible. Here's how I've done it:
You need to create a portable Erlang and acquire RabbitMQ server files.
You can install regular Erlang on another computer, then copy the whole installation directory to the computer with the limited account. You can use your local documents or AppData, e.g. C:\Users\Limited_Account\AppData\erl5.10.4
(If you don't have any access to another computer, you can extract the setup file with 7-Zip but it'll be troublesome to fix paths.)
Modify the erl.ini in the bin folder with the new path. (By default erl.ini uses Unix line endings, so it might appear as a single line.)
[erlang]
Bindir=C:\\Users\\Limited_Account\\AppData\\erl5.10.4\\erts-5.10.4\\bin
Progname=erl
Rootdir=C:\\Users\\Limited_Account\\AppData\\erl5.10.4
See if bin\erl.exe opens the Erlang shell. If you see a crash dump, the path might not be correct. If the Visual C++ redistributable files were not installed before, it will nag you about msvcr100.dll and you will need to copy them manually as well, but I don't recommend that.
Download the zip version of RabbitMQ server from https://www.rabbitmq.com/install-windows-manual.html and extract it.
Set the %ERLANG_HOME% variable. You can type set ERLANG_HOME=C:\Users\Limited_Account\AppData\erl5.10.4 in the command line. Alternatively, you can add this line to every .bat in the sbin folder.
Now you can use the management scripts in the sbin folder. For example, you can use rabbitmq_server-3.2.4\sbin\rabbitmq-server.bat to start the RabbitMQ Server. Obviously, starting as a service is not an option because you are not an admin.
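Putting the last two steps together, a typical session from a cmd prompt might look like this (the paths simply follow the examples above):

```shell
:: Set per-session environment, then start the broker in the foreground
set ERLANG_HOME=C:\Users\Limited_Account\AppData\erl5.10.4
cd rabbitmq_server-3.2.4\sbin
rabbitmq-server.bat
```

Since it runs in the foreground rather than as a service, closing the cmd window stops the broker, so keep the window open while your Celery workers need it.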
For further information, see: https://www.rabbitmq.com/install-windows-manual.html