VSCode: Pylance doesn't work via SSH connection - python

There is a problem: Pylance (IntelliSense) does not work on the remote server, while it works fine locally. Pylance itself is installed both locally and on the server. Imports are rendered plain white, only "Loading..." pops up when I hover over them, and "Go to definition" doesn't work either.
My setup:
Python: 3.10.2;
Pylance: 2022.1.3;
Python extension: v2021.12.1559732655;
Remote - SSH: v0.70.0
VSCode: 1.63.2;
Local OS: Windows 10 Pro;
Remote OS: Ubuntu 20.04.3 LTS
Environment: virtualenv
I've already tried a bunch of options:
Installed other versions of Pylance;
Older versions of the Python extension itself;
Updated Python to the latest version from 3.8.10 to 3.10.2;
Changed the language server to Jedi and reverted to Pylance;
Reinstalled extensions, VSCode;
Recreated the environment with new python.
Added these settings to the remote settings.json:
{
    "python.insidersChannel": "daily",
    "python.languageServer": "Pylance"
}
"Python: Show output" gives this output:
Experiment 'pythonaacf' is active
Experiment 'pythonTensorboardExperiment' is active
Experiment 'pythonSurveyNotification' is active
Experiment 'PythonPyTorchProfiler' is active
Experiment 'pythonDeprecatePythonPath' is active
> conda info --json
> ~/jupyter_env/bin/python ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> ~/.anaconda_backup/bin/conda info --json
Python interpreter path: ./jupyter_env/bin/python
> conda --version
> /bin/python ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> /bin/python2 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> /bin/python3 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> /bin/python3.10 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> /usr/bin/python2 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> /usr/bin/python3 ~/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/interpreterInfo.py
> ". /home/db/jupyter_env/bin/activate && echo 'e8b39361-0157-4923-80e1-22d70d46dee6' && python /home/db/.vscode-server/extensions/ms-python.python-2021.12.1559732655/pythonFiles/printEnvVariables.py"
Starting Jedi language server.
> ~/jupyter_env/bin/python -m pylint --msg-template='{line},{column},{category},{symbol}:{msg} --reports=n --output-format=text ~/data/qualityControl/core/data_verification/dataQualityControl.py
cwd: ~/
##########Linting Output - pylint##########
************* Module core.data_verification.dataQualityControl
18,53,error,syntax-error:non-default argument follows default argument (<unknown>, line 18)

Basically, the problem was this: when a large workspace is opened in VSCode, Pylance tries to index all of it, and the highlighting doesn't turn on until indexing finishes. In my case several AWS buckets were mounted, and with about 100TB of data the file indexing simply never finished. If I open a specific project folder instead, the problem disappears. So if you run into this, try narrowing the workspace to the actual project directory (see the sketch below). Good luck!
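A hedged sketch of that fix; the --remote flag of the code CLI is an assumption here, and File > Open Folder over the SSH connection achieves the same thing. The project path comes from the log above, the host alias is hypothetical:

# Open only the project folder remotely, so Pylance indexes it alone
# rather than the mounted buckets ("--remote" and "myserver" assumed):
code --remote ssh-remote+myserver /home/db/data/qualityControl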

This is fixed by setting "python.languageServer" to "Pylance".
See GitHub issue 11

Related

How to run python script in background using Anaconda? ('nohup python -u test.py &' doesn't work!)

I have a simple python script test.py:
import time
import logging
logging.basicConfig(filename='app.log', filemode='w', level=logging.DEBUG)
i = 0
while i < 100:
    i += 1
    logging.info(i)
    print(i)
    time.sleep(1)
I want to run this script in the background using Anaconda. I tried: nohup python -u test.py &
The python command invokes Anaconda's Python on my machine. The script still seems tied to the terminal I launched it from: if I close the terminal, execution stops, and if I use 'exit' to close the terminal, the terminal turns black but doesn't close. If I close it with 'X', execution stops.
What is the correct way to trigger a python script to run on anaconda in background?
$ conda info
active environment : None
conda version : 4.9.2
conda-build version : 3.20.5
python version : 3.8.5.final.0
virtual packages : __win=0=0
__archspec=1=x86_64
base environment : F:\Automation\Anaconda3 (read only)
channel URLs : https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
platform : win-64
user-agent : conda/4.9.2 requests/2.24.0 CPython/3.8.5 Windows/10 Windows/10.0.17763
administrator : False
netrc file : None
offline mode : False
Terminal used to run script: Git, version: 2.29.2.windows.3
Use the python executable of your conda environment.
Run conda info to get the path to the base environment (<base_environment_path>).
To use a specific env, point at the python binary inside it:
nohup <base_environment_path>/envs/<env-name>/bin/python <script_name>.py &
e.g.
nohup /home/ubuntu/anaconda3/envs/my-env/bin/python test.py > output.txt &
You will need to know the path to the python executable in the environment you want to execute the code in.
You can find that by running
conda info
in your desired environment. When you find the location of environment you just have to to do this:
nohup <absolute path to your anaconda environment>/bin/python <YOUR SCRIPT> > output.txt &
If you don't want to use conda info to find it, you can just execute this:
nohup <absolute path to your anaconda>/anaconda3/envs/<your environment>/bin/python <YOUR SCRIPT> > output.txt &
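A slightly fuller sketch, reusing the hypothetical paths from the example above: capturing stderr as well and saving the PID makes the background job easier to monitor and to stop later:

# 2>&1 sends stderr to the same log file; $! is the PID of the last background job
nohup /home/ubuntu/anaconda3/envs/my-env/bin/python test.py > output.txt 2>&1 &
echo $! > test.pid
# later: tail -f output.txt, or kill "$(cat test.pid)" to stop it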

Configure AWS Cloud9 to use Anaconda Python Environment

I want AWS Cloud9 to use the Python version and specific packages from my Anaconda Python environment. How can I achieve this? Where should I look in the settings or configuration?
My current setup: I have an AWS EC2 instance with Ubuntu Linux, and I have configured AWS Cloud9 to work with the EC2 instance.
I have Anaconda installed on the EC2 instance, and I have created a conda Python3 environment to use, but Cloud9 always wants to use my Linux system's installed Python3 version.
I finally found something that forces AWS Cloud9 to use the Python3 version installed in my Anaconda environment on my AWS EC2 instance.
The instructions show how to create a custom AWS Cloud9 runner for Python; mine looks like this:
{
    "cmd": ["/home/ubuntu/anaconda3/envs/ijackweb/bin/python3.6", "$file", "$args"],
    "info": "Running $project_path$file_name...",
    "selector": "source.py"
}
I just create a new runner and paste the above code in there, and Cloud9 runs my application with my Anaconda environment's version of Python3.
The only thing I don't understand about the above code is what the "selector": "source.py" line does. (Judging by the built-in runners, such as the regex selector in the config below, it appears to tell Cloud9 which file names the runner applies to.)
After some testing, I realised that my previous answer prevents you from using the debugger. Building on @Sean_Calgary's answer (which is better than my original one), you can edit one of the built-in python runners (again, just replacing the python call with the full path to the conda env's python), like so:
{
    "script": [
        "if [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db -ik_p=15471 -ik_cwd=$project_path \"$file\" $args",
        "else",
        " /home/tg/miniconda/envs/env-name/bin/python \"$file\" $args",
        "fi",
        "checkExitCode() {",
        " if [ $1 ] && [ \"$debug\" == true ]; then ",
        " /home/tg/miniconda/envs/env-name/bin/python -m ikp3db 2>&1 | grep -q 'No module' && echo '",
        " To use python debugger install ikpdb by running: ",
        " sudo yum update;",
        " sudo yum install python36-devel;",
        " sudo pip-3.6 install ikp3db;",
        " '",
        " fi",
        " return $1",
        "}",
        "checkExitCode $?"
    ],
    "python_version": "python3",
    "working_dir": "$project_path",
    "debugport": 15471,
    "$debugDefaultState": false,
    "debugger": "ikpdb",
    "selector": "^.*\\.(py)$",
    "env": {
        "PYTHONPATH": "$python_path"
    },
    "trackId": "Python3"
}
To do this, just click on 'runners' next to CWD in the bottom-right corner -> python3 -> edit runner -> save as 'env-name.run' in /.c9/runners (the save-as dialog should point you to the right directory by default).
N.B.
Replace env-name with the name of your environment throughout.
You will need the package for the debugger installed in your conda env. It's called ikp3db.
You may need to check the path to your conda env's python executable by activating the environment and running which python (this caught me out because my path ended in /python, not /python3.6, even though it's python 3.6 that's installed); see the sketch below.
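A minimal sketch of that check, assuming the miniconda paths used in the runner above:

# Activate the env, then confirm the interpreter path the runner needs:
source /home/tg/miniconda/bin/activate env-name
which python    # e.g. /home/tg/miniconda/envs/env-name/bin/python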
You could use a 'shell script' runner type. To do this you would:
create your conda env, with python3 and any packages etc you want in it. Call it py3env
create a directory to hold your runner scripts, something like $HOME/c9_runner_scripts
put a script in there called py3env_runner.sh, with code like:
# conda must be initialised before 'conda activate' works inside a script (adjust the path):
source ~/anaconda3/etc/profile.d/conda.sh
conda activate py3env
python ~/c9/my_py3_script.py
Then create a run configuration with the 'shell script' runner type and enter c9_runner_scripts/py3env_runner.sh
For me, on CentOS 7, the only way to execute with my conda Python 3.9.4 was to add a conda activate line to my .bash_profile, like this:
conda activate /var/www/my_conda/python3.9
Then in Cloud9, when I run my code under the conda Python 3.9 env, all is fine.
This is my simple python test code, which prints the current python version:
import sys
print(sys.version)
Best.

cassandra-snapshotter: not found

I installed cassandra-snapshotter using pip install cassandra_snapshotter. It works fine if I run it in a terminal with this command:
sudo cassandra-snapshotter --s3-bucket-name=vivek-bucket \
    --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX \
    backup --hosts=172.31.2.85 --user ubuntu \
    --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra \
    --use-sudo=yes --new-snapshot
When I tried the same command with Ansible, it failed with this error:
"start": "2017-04-25 10:02:39.111333",
"stderr": "/bin/sh: 1: cassandra-snapshotter: not found",
"stderr_lines": [
"/bin/sh: 1: cassandra-snapshotter: not found"
]
- name: snapshot and backup
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: taking snapshot
      shell: cassandra-snapshotter --s3-bucket-name=vivek-bucket --s3-base-path=cassandra --aws-access-key-id=XXXX --aws-secret-access-key=XXX backup --hosts=172.31.2.85 --user ubuntu --sshkey=/home/ubuntu/XXXX.pem --cassandra-conf-path=/etc/dse/cassandra --use-sudo=yes --new-snapshot
pip installs executables in its own location, and that location is probably not in the search path Ansible uses. You can either set the PATH environment variable in your Ansible task and extend it to include that location, or just run which cassandra-snapshotter on the command line and put the full path to the executable in your playbook (see the sketch below).
Also: I don't think you are using any 'shell' features in that cassandra-snapshotter call, so it's better to use the command module (https://docs.ansible.com/ansible/command_module.html) when possible.
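A minimal sketch of that lookup (the resulting path is only an example; yours may differ):

# Find where pip put the entry point, then use the absolute path
# in the Ansible task instead of the bare command name:
which cassandra-snapshotter
# -> e.g. /usr/local/bin/cassandra-snapshotter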

Activate Anaconda Python environment from makefile

I want to use a makefile to build my project's environment with anaconda/miniconda, so that I can clone the repo and simply run make myproject.
myproject: build

build:
	@printf "\nBuilding Python Environment\n"
	@conda env create --quiet --force --file environment.yml
	@source /home/vagrant/miniconda/bin/activate myproject
If I try this, however, I get the following error:
make: source: Command not found
make: *** [source] Error 127
I have searched for a solution, but this question/answer (How to source a script in a Makefile?) suggests that I cannot use source from within a makefile.
This answer, however, proposes a solution (and received several upvotes), but it doesn't work for me either:
( \
source /home/vagrant/miniconda/bin/activate myproject; \
)
/bin/sh: 2: source: not found
make: *** [source] Error 127
I also tried moving the source activate step to a separate bash script and executing that script from the makefile. That doesn't work either, I assume for a similar reason, i.e. I am running source from within a shell.
I should add that if I run source activate myproject from the terminal, it works correctly.
I had a similar problem; I wanted to create, or update, a conda environment from a Makefile to be sure my own scripts could use the python from that conda environment.
By default make uses sh to execute commands, and sh doesn't know source (also see this SO answer). I simply set the SHELL to bash and ended up with (relevant part only):
SHELL=/bin/bash
CONDAROOT = /my/path/to/miniconda2
...
install: sometarget
	source $(CONDAROOT)/bin/activate && conda env create -p conda -f environment.yml && source deactivate
Hope it helps
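A quick way to see the root cause for yourself: on Debian/Ubuntu, /bin/sh is dash, which only provides the portable . builtin, while bash also accepts source:

sh -c 'source /dev/null'     # -> sh: 1: source: not found
bash -c 'source /dev/null'   # no error: 'source' is a bash builtin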
You can use this; it's working for me at the moment:
report.ipynb: merged.ipynb
	( bash -c "source ${HOME}/anaconda3/bin/activate py27; which -a python; \
	jupyter nbconvert \
	    --to notebook \
	    --ExecutePreprocessor.kernel_name=python2 \
	    --ExecutePreprocessor.timeout=3000 \
	    --execute merged.ipynb \
	    --output=$< $<" )
I had the same problem. Essentially, the only solution is the one stated by 9000: I have a setup shell script inside which I set up the conda environment (source activate python2), and then I call make. I experimented with setting up the environment from inside the Makefile, with no success.
I have this line in my makefile:
installpy:
	./setuppython2.sh && python setup.py install
The error message is:
make
./setuppython2.sh && python setup.py install
running install
error: can't create or remove files in install directory
The following error occurred while trying to add or remove files in the
installation directory:
[Errno 13] Permission denied: '/usr/lib/python2.7/site-packages/test-easy-install-29183.write-test'
Essentially, I was able to set up my conda environment to use my local conda, to which I have write access, but this is not picked up by the make process. The reason the environment set up in my shell script via 'source' is not visible to the make process is that source only modifies the shell it runs in: the script executes in a child shell, and a child can never change its parent's environment (see the demonstration below). I just want to share this so that other people don't waste time trying it. I know autotools has a way of working with python, but make itself is probably limited in this respect.
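A tiny demonstration of that parent/child rule, in plain shell and nothing conda-specific:

bash -c 'export FOO=bar'   # the export happens in a child shell
echo "FOO is '$FOO'"       # still empty: the parent is unchanged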
My current solution is a shell script:
cat py2make.sh
#!/bin/bash
# (bash, not plain sh: the 'source' builtin below needs it)
# the prefix should be changed to the target
# of installation or pwd of the build system
PREFIX=/some/path
CONDA_HOME=$PREFIX/anaconda3
PATH=$CONDA_HOME/bin:$PATH
unset PYTHONPATH
export PREFIX CONDA_HOME PATH
source activate python2
make
This seems to work well for me; a usage sketch follows.
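Usage is just a matter of running the wrapper instead of invoking make directly (assuming the script sits next to the Makefile):

chmod +x py2make.sh   # once
./py2make.sh          # exports the conda paths, then runs make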
There was a solution for a similar situation, but it does not seem to work for me.
My modified Makefile segment:
installpy:
	( source activate python2; python setup.py install )
Error message after invoking make:
make
( source activate python2; python setup.py install )
/bin/sh: line 0: source: activate: file not found
make: *** [installpy] Error 1
I am not sure where I went wrong. If anyone has a better solution, please share it.

GCE module in Ansible cannot find apache-libcloud although gce.py works

I installed ansible and apache-libcloud with pip. I can also use the gcloud CLI, and ansible works for any non-gce-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
        image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
        pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible GitHub repo works: the script finds the apache-libcloud library and prints JSON with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This neither changes the behavior nor prevents the error messages.
UPDATE 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath

- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task, executing the same apache-libcloud imports as the gce ansible module. The script imports everything just fine!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
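If you would rather not export it globally, the variable can also be scoped to a single run (a sketch; the playbook name is taken from the question):

# The forked module process inherits the variable for this run only:
PYTHONPATH=/usr/local/lib/python2.7/site-packages ansible-playbook site.yml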
I'm using OSX and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime, and it looked like it had nothing to do with my normal system PYTHONPATH. For me, the system PYTHONPATH was empty, and setting it as e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there's only ansible's own site-packages and some strange Python 3 installations (I'm using python 2.7).
Something in this discussion made me think it might be a problem with the ansible installation itself; mine was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv python installation at the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) to the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any python paths except the ones in the PYTHONPATH environment variable.
I find this peculiar behavior, which is not documented on the ansible website.
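A quick sanity check before re-running the playbook (apache-libcloud exposes a version attribute):

# If this prints a version, the gce module should be able to find libcloud too:
PYTHONPATH=/path/to/site-packages python -c 'import libcloud; print(libcloud.__version__)'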
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially, there are some magic files you can add so that the brew'd ansible will search an extra folder for packages:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case, the fix was simply:
pip install apache-libcloud
