Starting TRAC server with multiple independent projects - python

I'm running a TRAC server (the tracd service) with 3 independent projects configured. Each project has its own password file in order to keep user management independent. TRAC is started as a Windows service as described on https://trac.edgewall.org/wiki/0.11/TracStandalone
It seems that starting the TRAC server does not work if the string value of 'AppParameters' in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\tracd\Parameters is too long. The maximum length seems to be around 260 characters.
The TRAC server starts successfully with the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth=',C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth=',C:\Trac\Balances\conf\.htpasswd,mt.com' --auth=',C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
The TRAC server does not start with the following 'AppParameters' value:
C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' C:\Trac\Moisture C:\Trac\Balances C:\Trac\Weights
If I add a fourth project, it is no longer possible to start the TRAC server at all because the string is too long. Is this problem known? Is there a workaround?

You can shorten your command by using the -e option to specify the Trac environment parent directory rather than explicitly listing each environment path.
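For example, since all three of your environments live under C:\Trac, the failing command could look something like this (the --auth entries stay as they are, but the per-project paths are replaced by a single parent directory):

C:\Python27\Scripts\tracd-script.py -p 80 --auth='Moisture,C:\Trac\Moisture\conf\.htpasswd,mt.com' --auth='Balances,C:\Trac\Balances\conf\.htpasswd,mt.com' --auth='Weights,C:\Trac\Weights\conf\.htpasswd,mt.com' -e C:\Trac

A fourth project placed under C:\Trac would then be picked up automatically, costing only the length of its extra --auth entry.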
A more extensive solution:
You could run the service with nssm.
Install nssm and put it on your path. I installed it using the Chocolatey package manager: choco install -y nssm.
Create a batch file, run_tracd.bat:
C:\Python27-x86\Scripts\tracd.exe -p 8080 env1
Run nssm install tracd and point the service at the batch file in the GUI that appears.
Run nssm start tracd.
You don't have to do it exactly like this. You could skip the batch file and enter the parameters in the nssm GUI instead. I'm not a Windows expert, but I like having the batch file because it's easier to edit. On the other hand, there may be security concerns I'm unaware of, and putting the parameters in the nssm GUI may be more robust (you don't have to worry about accidental deletion of the batch file). Entering the command directly in the GUI also works for me.
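If you'd rather script the setup than click through the GUI, nssm can also be driven entirely from the command line (a sketch; the path to the batch file is an example, adjust it to wherever you saved run_tracd.bat):

nssm install tracd "C:\Trac\run_tracd.bat"
nssm start tracd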

Installing packages in a Kubernetes Pod

I am experimenting with running Jenkins on a Kubernetes cluster. I have Jenkins running on the cluster using a Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.
In my Jenkinsfile, I have tried the following:
withPythonEnv('python3.9') {
    pysh 'pip3 install pytest'
}
stage('Test') {
    sh 'python --version'
}
But it fails with java.io.IOException: error=2, No such file or directory.
It is not feasible to run the Python install commands on every build, hardcoded into the Jenkinsfile. After some research I found that I would have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for pods; there are only PostStart and PreStop.
I'm not sure how to install Python and MongoDB and use that as a template for the agent pods.
This is the default YAML file that I used for the Helm chart - jenkins-values.yaml
Also, I'm not sure if I need to use Helm.
You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this (the base image runs as the jenkins user, so you need to switch to root for the install):
FROM jenkins/jenkins
USER root
RUN apt-get update && apt-get install -y python3 python3-pip
USER jenkins
Then build the image, push it to a container registry, and replace the image: jenkins/jenkins reference in your Helm chart with the name of the image you built, including the registry you pushed it to. With this, your applications are installed in the container every time it runs.
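For example (the image name and registry here are placeholders, substitute your own):

docker build -t registry.example.com/myteam/jenkins-python:latest .
docker push registry.example.com/myteam/jenkins-python:latest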
The second way, which works but isn't perfect, is to override the container's command and arguments, with something like what is described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
The issue with this method is that some deployments already use the startup command, and by redefining the entrypoint you can prevent the container's original startup command from ever running, thus causing the container to fail.
(This should work if added to the helm chart in the deployment section, as they should share roughly the same format)
Otherwise, there's a really improper way of installing programs in a running pod: use kubectl exec -it deployment.apps/jenkins -- bash and then run your installation commands in the pod itself.
That being said, it's a poor idea because if the pod restarts, it reverts to the original image without the required applications installed. If you build a new container image instead, your apps remain installed across restarts. This approach should basically never be used, unless it's a temporary pod serving as a testing environment.

Mounting a virtual environment via SSHFS on local machine using its python3 file not working

So I have mounted part of a development server that holds a virtual environment used for development testing. The reason for this is to get access to the installed packages, such as Django-rest-framework and Django itself, without setting them up locally (and to be sure to use the same versions as the development server). I know that it's perhaps better to use Docker for this, but that's not the case right now.
The way I've done it is by installing SSHFS via an external Homebrew tap (it's no longer supported in homebrew-core) - via this link: https://github.com/gromgit/homebrew-fuse
After that I ran this command in the terminal to mount, over SSH, the specific part of the development server that holds the virtual environment:
sshfs -o ssh_command='ssh -i /Users/myusername/.ssh/id_rsa' myusername@servername:/home/myusername/projectname/env/bin ~/mnt/projectname
It works fine and I have it mounted on my local disk at ~/mnt/projectname.
Now I go into VSCode, open the folder, and select the file called "python3" as my interpreter (which I should, right?). However, this file is just a symlink, 16 bytes in size. I suspect something is wrong here, but I'm not sure how to fix it. Can someone take a look and give some input? I'll attach a screenshot of the mounted directory.
Screenshot of virtualenv directory mounted on local machine
The solution to the problem was to use the VSCode extension Remote - SSH and run VSCode directly in the remote location, and from there access the virtual environment.

Get vim on a remote server without system administration permissions

I have been given a profile (with a /home directory) on a remote Linux server to work on projects that need powerful computing resources. I'd like to use Vim to edit code (mostly Python) on the remote server, as it can be run through a shell and doesn't require a slow GUI exchange. Currently, the Debian distribution on the remote server has a barebones vi installed and no Vim. Is there a way to install Vim (perhaps in my home directory?) without superuser permissions?
You should be able to install Vim locally, for example by downloading a binary, or by compiling it from source with
git clone https://github.com/vim/vim.git
cd vim/src
make
From there, you can simply add the directory containing the compiled binary to your PATH.
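If you want a proper per-user install rather than running the binary out of the source tree, something like the following should work (a sketch, assuming a build toolchain and the usual curses headers are available on the server):

git clone https://github.com/vim/vim.git
cd vim
./configure --prefix=$HOME/.local
make && make install
export PATH="$HOME/.local/bin:$PATH"   # add this line to ~/.bashrc to make it permanent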

PyCharm always "Uploading PyCharm helpers" to the same remote Python interpreter on start

When I start PyCharm with a remote Python interpreter, it always performs "Uploading PyCharm helpers", even when the remote machine's IP is the same and it already contains previously uploaded helpers. Is this behaviour correct?
This is a well-known problem that can be a major obstacle to productivity, especially if you use disposable instances in your workflow. It leads to a forced coffee break of 20 minutes every time you want to connect to a remote system. Unacceptable.
It seems that PyCharm creates a build.txt file in the remote helpers folder whose contents are just the current PyCharm build number, for instance:
PY-171.4694.38
So it's possible to upload the helpers manually by rsyncing /Applications/PyCharm.app/Contents/helpers/ to the remote machine and then manually creating a build.txt file there with your current build number. After that, PyCharm should not attempt to re-upload them.
Example:
$ rsync -avz /Applications/PyCharm.app/Contents/helpers/ cluster:/home/xapple/.pycharm_helpers/
$ echo "PY-171.4694.38" > /home/xapple/.pycharm_helpers/build.txt
$ python /home/xapple/.pycharm_helpers/pydev/setup_cython.py build_ext --inplace
In my case, several projects were deployed to the remote server by PyCharm. All of them got stuck when one of the projects went wrong on the remote server. Solution: keep only the one you need to work on and restart PyCharm via "Invalidate Caches".
Note that -- at least as late as version 2018.3.x -- PyCharm also appears to require re-uploading the helpers when the local network connection changes, for some reason.
What I've observed in my case is that if, while PyCharm remains running, I relocate my laptop and connect to a different LAN, the next remote debugging session I initiate triggers the lengthy helper upload. The contents uploaded in this case turn out to be exactly identical to what is already present in that directory on the remote system (I compared them), so the upload is entirely superfluous, but PyCharm isn't able to detect this.
As there's no way I know of to bypass or cancel the automatic helpers upload, the only recourse is to completely exit PyCharm (close all open project windows) after each change of network connection and restart the IDE. In my experience, the next session then gets through the "checking remote helpers" phase without re-uploading all the helpers. Of course, this is a major annoyance if you have multiple projects open, but it's faster than waiting the (tens of) minutes for the agonizingly slow helpers upload to complete.
What the other responders describe for changing PyCharm versions is accurate: it is sufficient to use rsync, ftp, scp, or whatever to transfer the contents of the new local helpers directory (on Linux, a subdirectory of where the app is installed) to the remote system (on Linux, ~/.pycharm_helpers, where ~ is the home directory of the user used for the remote debugging session), and to update the remote build.txt in the helpers directory with the new PyCharm build number.
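A Linux-flavoured sketch of that procedure (the local install path and the build number are examples; check Help -> About for your actual build):

$ rsync -avz ~/opt/pycharm-2018.3/helpers/ remotehost:~/.pycharm_helpers/
$ ssh remotehost 'echo -n "PY-183.4284.148" > ~/.pycharm_helpers/build.txt'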
This problem came back again 6 years later with PyCharm 2022.3.2.
The directory /Applications/PyCharm.app/Contents/helpers/ doesn't exist anymore, so the previous trick doesn't work.
What solved it this time is simply to:
Quit PyCharm.
Delete the ~/.pycharm_helpers directory on the remote server (a one-liner for this is sketched below).
Relaunch PyCharm and let it do its thing.
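The deletion in the second step can be done from your workstation, for example (remotehost being a placeholder for your server):

$ ssh remotehost 'rm -rf ~/.pycharm_helpers'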
According to the docs,
PyCharm checks remote helpers version on every remote run, so if you update your PyCharm version, the new helpers will be uploaded automatically and you don't need to recreate remote interpreter.
A fast solution (less than 3 seconds between me and DigitalOcean), inspired by xApple's excellent answer.
On the remote server:
export SOURCE=<your ip>
export PORT=9000
export HELPERS=$HOME/.pycharm_helpers
# PyCharm Help -> About
export BUILD=PY-172.4343.24 # 2017/10/11
cd $HELPERS
rm -fr *
# my OS is Ubuntu - adapt the firewall rules to your own setup
sudo ufw allow from $SOURCE proto tcp to any port $PORT
netcat -l -v -p $PORT | tar xz # this waits for the incoming connection
# after finish
sudo ufw delete allow from $SOURCE proto tcp to any port $PORT
echo -n $BUILD > build.txt
python $HELPERS/pydev/setup_cython.py build_ext --inplace
On your workstation:
export TARGET=<remote server ip>
export PORT=9000
export HELPERS=<path to helpers> # for me it's $HOME/opt/pycharm-2016.3/helpers
cd $HELPERS
tar cfz - . | netcat -v $TARGET $PORT
Turning off the firewall addressed the problem in my case (macOS Mojave). Note that this is not a general solution, as it was not tested in any other environment/OS.

How do I deploy a python application to an external server?

I have written a Python script on my local laptop which uses several third-party packages. I now want to run my script regularly (via a cron job) on an external server.
The external server most likely does not have all the dependencies installed. Is there a way to package and deploy my Python script and its dependencies in order to ensure that it will run?
I have already tried to package the script as an exe, but failed to do so.
It's not clear what kind of third-party packages you have, but for those that were installed with pip, you can do this in your dev environment:
$ pip freeze > requirements.txt
And then you can install these packages in your production environment:
$ pip install -r requirements.txt
Ideally, you will already have a virtualenv on your production box. If not, it may well be worth reading about virtualenvs before deploying your script.
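A rough sketch of how that might look on the server (the paths and script name are examples):

$ virtualenv ~/venvs/myscript
$ ~/venvs/myscript/bin/pip install -r requirements.txt
$ crontab -e   # then add an entry such as:
# 0 2 * * * ~/venvs/myscript/bin/python ~/myscript.py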
Just turn your computer into a server. Set up your router for port forwarding so that your server's contents are served when the router's IP is entered. You can of course purchase a DNS domain to give that IP a human-readable URL.
