what is the equivalent of `shell=True` in `paramiko`? [duplicate] - python

This question already has answers here:
Environment variable differences when using Paramiko
(2 answers)
Closed 1 year ago.
I'm using paramiko's SSHClient to control a server via its exec_command method. However, I'm failing to use Python over there because it says python not found, and I realized the entire conda environment is not available. On my local machine, I get around this with shell=True passed to subprocess. Question: how do I do this in paramiko, or is there another command I should run to get conda loaded up in the server's shell?
EDIT: exec_command offers an optional environment keyword argument, but I don't know how to leverage it, if it is useful here at all.

Thanks to insightful comments by @CharlesDuffy in a "don't feed me, teach me how to fish" style, I solved the problem as follows:
For conda/miniconda to be loaded, I prefaced my commands with source ~/miniconda3/bin/activate;.
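A minimal sketch of what that looks like with paramiko (the hostname, username, and script.py below are placeholders, and authentication is assumed to come from your usual SSH keys/agent):

import paramiko

# Placeholders: host, user, and script.py are assumptions for illustration.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("server.example.com", username="me")  # uses your default SSH keys/agent

# Prefacing the command with the activation script is what stands in for
# subprocess's shell=True convenience on the local machine.
stdin, stdout, stderr = client.exec_command(
    "source ~/miniconda3/bin/activate; python script.py"
)
print(stdout.read().decode())
client.close()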
Additional points:
For a permanent solution, the preface can be added to .profile, .bash_profile, or .bash_login, whichever is loaded by default when ssh'ing. On a fresh installation, e.g. a virtual machine, those files don't exist in the first place.
Sourcing that conda file is paramount (as opposed to executing it).
Ostensibly, the shell=True kwarg of the subprocess module does something like this behind the scenes.

Related

How can I access functions inside the anaconda3/bin directory when running a bash script with subprocess.call?

I have the following problem: I wrote a bash script for data analysis that works perfectly fine when I run it from the terminal. To further automate the process, I wanted to use a Python script that runs the bash script (using subprocess.call), changes the working directory, and reruns the script (and so on). This also worked fine on my MacBook. However, I need to do the analysis on a Linux machine, and that is where the problem occurred. Again, running the script from the terminal worked fine, but once I tried doing it from my Python script, it failed to find the relevant functions for the analysis. The functions are stored inside the anaconda3/bin folder.
(Python does not even find other functions like pip.)
Of course, I could add the path to all the functions in the bash script, but that seems very inefficient to me. So my question is: is there a better way of telling Python where to look for the functions? And can you maybe explain why running the script from the terminal works, but not when I use subprocess.call?
Here is the python script:
import subprocess
import os

path_list = ["Path1",
             "Path2"]

for path in path_list:
    os.chdir(path)
    subprocess.call("Users/.../bash_script", shell=True)
I'm just posting my series of comments as an answer since I think this at least constitutes a reasonable answer for anyone running into a similar issue (your question could definitely be common enough to index from search engine results).
Issue:
...running the script from the terminal worked fine but once I tried doing this with my python script it fails to find the relevant functions for the analysis
In general, you can troubleshoot this kind of problem with:
import subprocess
subprocess.call('echo $PATH', shell=True)
If the directory that contains the relevant binaries/scripts/etc. is not in the output, then you are facing a PATH issue in the shell created by subprocess.call.
The exact problem as confirmed by the OP in comments is that anaconda3/bin is not part of your PATH. Your script works in a regular terminal session because of the Anaconda initialization function that gets added to your .bashrc when installing.
Part of an answer that is very helpful here: Python - Activate conda env through shell script
The problem with your script, though, lies in the fact that the .bashrc is not sourced by the subshell that runs shell scripts (see this answer for more info). This means that even though your non-login interactive shell sees the conda commands, your non-interactive script subshells won't - no matter how many times you call conda init.
Solution 1: Manually use the Anaconda sourcing function in your script
As the OP mentioned in the comments, their workaround was to use the initialization function added to their .bashrc in the script they are trying to run. Although this perhaps doesn't feel like a great solution, it is a "good enough" workaround. Unfortunately I don't use Anaconda on Linux, so I don't have an exact snippet of what this looks like. See the next section for a possibly "cleaner" solution.
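A hedged sketch of what the top of the bash script might look like (the Anaconda path and the environment name myenv are assumptions; adjust both to your installation):

#!/bin/bash
# Assumption: Anaconda lives in ~/anaconda3 and the target env is called myenv.
source ~/anaconda3/etc/profile.d/conda.sh   # defines the conda shell function
conda activate myenv                        # puts the env's bin directory on PATH

# ... the rest of the analysis script follows and can now see anaconda3 tools ...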
Solution 2: Use bash -i to run your script
As mentioned in the same answer linked above, you might be able to use:
bash -i Users/.../bash_script
This will tell bash to run in interactive mode, which then properly sources your .bashrc file when creating the shell. As a result, Anaconda and related functions should work properly.
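Plugged into the Python script from the question, that could look roughly like this (the script path is the same placeholder used above):

import subprocess

# bash -i starts an interactive shell, so ~/.bashrc (including the conda
# init block) is sourced before the script runs.
subprocess.call("bash -i Users/.../bash_script", shell=True)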
Solution 3: Manually add anaconda3/bin to PATH
You can check out this answer to decide if this is something you want to do. Keep in mind they are speaking about a Windows OS but most of the same applies to Linux.
When you add the directory to your PATH, you are specifically telling your system to always look in that directory for commands when executing by name, e.g. ping or which. This can have unexpected behavior if you have conflicts (e.g. a command is found with the same name in /usr/bin and .../anaconda3/bin), and as such Anaconda does not add its bin folder to your PATH by default.
This is not necessarily "dangerous" per se, it's just not an ideal solution for most people. However, you are the boss of your own system. If you decide this works for your particular workflow, you can just add the export to your script:
export PATH="path/to/anaconda3/bin:$PATH"
This will set the PATH for use in the current shell and sub-processes.
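Alternatively, a hedged sketch of doing the same thing from the Python side only, so your interactive shell is left untouched (the anaconda3 path is still a placeholder):

import os
import subprocess

# Copy the current environment and prepend the (placeholder) Anaconda bin dir.
env = dict(os.environ)
env["PATH"] = "path/to/anaconda3/bin:" + env["PATH"]

# env= applies only to this child process, not to your login shell.
subprocess.call("Users/.../bash_script", shell=True, env=env)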
Solution 4: Manually source the conda script (possibly outdated)
As mentioned in this answer, you can also opt to manually source the conda.sh script (keep in mind your conda.sh might be in another directory):
source /opt/anaconda/etc/profile.d/conda.sh
This will essentially run that shell script and add the included functionality to the current shell (e.g. the one spawned by subprocess.call).
Keep in mind this answer is quite a bit older (~2013) and may not apply anymore, depending on how much conda has changed over the years.
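If it does still apply, a sketch of using it directly in the subprocess call might look like this; bash is requested explicitly because source is a bash builtin that a plain /bin/sh may not provide:

import subprocess

subprocess.call(
    "source /opt/anaconda/etc/profile.d/conda.sh && Users/.../bash_script",
    shell=True,
    executable="/bin/bash",  # make sure the shell understands 'source'
)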
Notes
As I mentioned in the comments, you may want to post some related questions on https://unix.stackexchange.com/. You have an interesting configuration challenge that may be better suited for answers specifically pertaining to Linux, since your issue is sourcing directly from Linux shell behavior.

os.getenv('CORENLP_HOME') returning None and os.environ['CORENLP_HOME'] returning KeyError but echo $CORENLP_HOME returns a path in terminal

I am aware a lot of similar questions exist, but I am unable to understand what is happening here. I am trying to follow the instructions for this Stanford CoreNLP python wrapper here; one of the steps is to set the CORENLP_HOME environment variable.
I ran the command:
export CORENLP_HOME=/path/to/stanford-corenlp-full-2018-10-05
I restarted the terminal; actually, I added the export to my ~/.bash_profile. Now when I do an echo $CORENLP_HOME in the terminal, I can see the path correctly. But on the other hand, if the corenlp wrapper code tries to find the same path through Python, it returns None.
So I separately checked two Python commands; the wrapper code uses os.getenv():
import os
print(os.getenv('CORENLP_HOME')) #prints None
print(os.environ['CORENLP_HOME']) #Throws a KeyError exception
macOS version: 10.15.4; Python: 3.7.6
I don't have a very deep understanding of environment variables in general; I want to understand what is happening here, or whether I am missing something simple. Happy to provide more information!
Environment variables are not global in the UNIX process model. Each process is provided a set of environment variables by the parent process that starts it; that is typically a copy of the parent's environment variables. If you are not starting PyCharm from the shell that ran the export command, PyCharm won't see that shell's environment variables.
The behavior you describe means you are not starting PyCharm from the shell that did the export CORENLP_HOME=/path/to/stanford-corenlp-full-2018-10-05.
P.S., The UNIX process model also means that a process cannot modify the environment variables of a different process.
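A short sketch of the two ways this usually gets resolved in practice, both following from the inheritance rule above (the path is the same placeholder as in the question):

import os

# Option A: rely on inheritance -- run the export in a terminal and start
# python (or PyCharm) from that same terminal so the variable is inherited.

# Option B: set the variable inside this process before the wrapper reads it.
os.environ["CORENLP_HOME"] = "/path/to/stanford-corenlp-full-2018-10-05"
print(os.getenv("CORENLP_HOME"))  # now prints the path instead of None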

Run Python-Script anywhere on Windows [duplicate]

This question already has answers here:
How to run a Python script portably without specifying its full path
(4 answers)
Closed 3 years ago.
I am looking for a way to run a python script from anywhere.
I have seen this solution but as it is based on Linux I am looking for a way on Windows (10).
Basically, what I want to do is execute a script with a given parameter. Because a console window could be disturbing for the user, I am using the .pyw file extension to hide the console.
Things that came to my mind:
a PATH-Variable
PowerShell-Script
Batch-file
Sadly I am not familiar or experienced with any of those, so I can't really tell if these ideas even provide a way to do that.
Any answer is appreciated.
Edit: I would like to make the command as short as possible for the client, so it is not necessary to write a novel to execute one simple task.
Another Edit:
I want the user/client to open cmd/PowerShell and use a command that executes my Python script, which is not in the directory they are in when opening the cmd. So the script is somewhere on their computer, but it shall be executable by a command from anywhere.
Try creating a code.bat file containing the command python script.py, then add the folder holding code.bat to the PATH in your environment variables. Any time you want to run the script, just type code in your shell.
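A hedged sketch of such a code.bat (the script location is an assumption; %* forwards any parameters the user types after code):

@echo off
rem code.bat -- put the folder containing this file on the user's PATH.
rem Assumption: the script lives at C:\path\to\script.pyw (placeholder).
rem Using pythonw instead of python would also suppress the console window.
python "C:\path\to\script.pyw" %*

Once that folder is on the PATH, typing code someparameter from any directory runs the script.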

Downloading python 3 on windows

I am currently trying to figure out how to set up Python 3 on my machine (Windows 10 Pro 64-bit), but I keep getting stuck.
I used the Python 3.6 downloader to install python, but whenever I try to use Command Prompt it keeps saying "'python' is not recognized as an internal or external command, operable program or batch file" as if I have not yet installed it.
Unlike answers to previous questions, I have already added ";C:\Python36" to my Path environment variable, so what am I doing wrong?
I am relatively new to python, but know how to use it on my Mac, so please let me know if I'm just fundamentally confused about something.
In Environment Variables, under Path, add your Python path. You said you already did, so please ensure there is a semicolon separating it from the previous entry.
Once it is added, save the Environment Variables dialog, then close all Command Prompt windows and open a new one. Only then will the Command Prompt pick up your new Python configuration.
The main thing: if you enter python, that may mean Python 2. For Python 3, type python3 and it should work.
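To confirm what a fresh Command Prompt actually sees, a quick hedged check is:

rem The new Python directory should appear in the output of:
echo %PATH%
rem Shows which python.exe (if any) the PATH resolves to:
where python
python --version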
Why are you using the Command Prompt? I just use the Python shell that comes with IDLE. It's much simpler.
If you have to use the Command Prompt for some reason, your problem is probably that you need to type in python3. Plain python is what you use for Python 2 in the Command Prompt.
Thanks everyone, I ended up uninstalling and then reinstalling Python, and selecting the option that says "add to environment variables." Previously, I typed the addition to Path myself, so I thought it might make a difference to include it in the installation process instead. Then I completely restarted my computer rather than just Command Prompt itself. I'm not sure which of these two things did it, but it works now!

How to get pycassaShell working in windows?

EDIT: I got it working; I went into the pycassa directory and typed python pycassaShell. But the second part of my question (at the bottom there) is still valid: how do I run a script in pycassaShell?
I recently installed Cassandra and pycassa and followed the instructions from here.
They work fine, except I can't get pycassaShell to load. When I type pycassaShell at the command prompt, I get
'pycassaShell' is not recognized as an internal or external command,
operable program or batch file.
Do I need to set up a path for it?
Also, does anyone know if you can run DDL scripts using pycassaShell? It is for this reason that I want to try it out. At the moment, I'm doing all my DDL in the Cassandra CLI; I'd like to be able to put it in a script to automate it.
You probably don't want to be running scripts with pycassaShell. It's designed more as an interactive environment to quickly try things out. For serious scripts, I recommend just writing a normal python script that imports pycassa and sets up the connection pool and column families itself; it should only be an extra 5 or so lines.
However, there is an (undocumented, I just noticed) optional -f or --file flag that you can use. It will essentially run execfile() on that script after startup completes, so you can use the SYSTEM_MANAGER and CF variables that are already set up in your script. This is intended primarily to be used as a prep script for your environment, similar to how you might use a .bashrc file (I don't know of a Windows equivalent).
Regarding DDL statements, I suggest you look at the SystemManager class.
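As a hedged sketch of that "normal python script" route, including the SystemManager for DDL-style work (the server address, keyspace, and column family names below are placeholders):

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily
from pycassa.system_manager import SystemManager, SIMPLE_STRATEGY, UTF8_TYPE

SERVER = 'localhost:9160'  # placeholder Cassandra Thrift address

# DDL: create a keyspace and a column family instead of using the CLI.
sys_mgr = SystemManager(SERVER)
sys_mgr.create_keyspace('MyKeyspace', SIMPLE_STRATEGY, {'replication_factor': '1'})
sys_mgr.create_column_family('MyKeyspace', 'MyColumnFamily', comparator_type=UTF8_TYPE)
sys_mgr.close()

# Regular reads and writes go through a connection pool and a ColumnFamily.
pool = ConnectionPool('MyKeyspace', [SERVER])
cf = ColumnFamily(pool, 'MyColumnFamily')
cf.insert('row_key', {'column_name': 'value'})
print(cf.get('row_key'))
pool.dispose()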
