Qt unable to connect to virtual framebuffer with tox - python

The yml version works (both in CI and locally, entering the script commands manually):
staging:
  stage: test
  image: foobar/python36-qt
  script:
    - pip install things...
    - export PATH=/root/.local/bin:$PATH
    - export DISPLAY=":$(( ( RANDOM % 250 ) + 1 ))"
    - Xvfb $DISPLAY -screen 0 1920x1080x16 &
    - pytest --ignore src/foobar/tests/gui/functional
The tox version does not work (I run it locally with python3 -m tox -rc tox.ini -e foobar -v):
[testenv:foobar]
whitelist_externals =
    Xvfb
    sh
setenv =
    PATH={env:PATH}:/root/.local/bin
    DISPLAY=":3"
    QT_DEBUG_PLUGINS=1
deps =
    pytest
install_command = pip install --extra-index-url https://pypi-ext.foobar.com/simple {opts} {packages}
commands =
    sh -c 'Xvfb ":3" -screen 0 1920x1080x16 &'
    pytest -sv tests --ignore tests/gui/functional
initial error:
foobar run-test: commands[2] | sh -c 'Xvfb ":3" -screen 0 1920x1080x16 &'
[17538] /home/localadmin/Documents/foobar/src/foobar$ /bin/sh -c 'Xvfb ":3" -screen 0 1920x1080x16 &'
foobar run-test: commands[3] | pytest -sv tests --ignore tests/gui/functional
[17540] /home/localadmin/Documents/foobar/src/foobar$ /home/localadmin/Documents/foobar/src/foobar/.tox/foobar/bin/pytest -sv tests --ignore tests/gui/functional
_XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
resulting error:
qt.qpa.xcb: could not connect to display ":3"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
The first thing I need to solve is why I constantly get _XSERVTransMakeAllCOTSServerListeners: server already running. Before I select a number X in DISPLAY=":X", I check /tmp and /tmp/.X11-unix to make sure no X screen exists, and remove it if it does.
I thought maybe I need to run them in the same command (I assume [17538] and [17540] are process IDs). No dice.
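For reference, the workaround I'm currently considering is to take Xvfb out of tox's commands entirely and start it from a pytest session fixture, so the X server and the tests share one process tree. A rough sketch (untested; the display number is as arbitrary as the ":3" above):

# conftest.py
import os
import subprocess
import time

import pytest

@pytest.fixture(scope="session", autouse=True)
def xvfb():
    # Start Xvfb before any test runs; assumes no other server owns :3
    proc = subprocess.Popen(["Xvfb", ":3", "-screen", "0", "1920x1080x16"])
    os.environ["DISPLAY"] = ":3"
    time.sleep(1)  # crude wait for the socket in /tmp/.X11-unix to appear
    yield
    proc.terminate()
    proc.wait()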

Related

Oracle Instant Client failing on Ubuntu-based agent despite correct TNS_ADMIN path

I am attempting to perform an SQL query using oracle-instantclient-basic-21.5 through an Ubuntu 20.04.3 agent hosted by Azure DevOps. The query itself (invoked as python query_data) works when I run it on my own machine with these specs:
Windows 10
Path=C:\oracle\product\11.2.0.4\client_x64\bin;...;...
TNS_ADMIN=C:\oracle\product\tns
Python 3.8.5 using sqlalchemy with driver="oracle" and dialect = "cx_oracle"
I am running the following:
pool:
  vmImage: 'ubuntu-latest'
steps:
- script: |
    sudo apt install alien
  displayName: 'Install alien'
- script: |
    sudo alien -i oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm
  displayName: 'Install oracle-instantclient-basic'
- script: |
    sudo sh -c 'echo /usr/lib/oracle/21/client64/ > /etc/ld.so.conf.d/oracle-instantclient.conf'
    sudo ldconfig
  displayName: 'Update the runtime link path'
- script: |
    sudo cp tns/TNSNAMES.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/ldap.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/SQLNET.ORA /usr/lib/oracle/21/client64/lib/network/admin
    sudo cp tns/krb5.conf /usr/lib/oracle/21/client64/lib/network/admin
  displayName: 'Copy and paste correct TNS content'
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'
- script: |
    export ORACLE_HOME=/usr/lib/oracle/21/client64
    export PATH=$ORACLE_HOME/bin:$PATH
    export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
    export TNS_ADMIN=$ORACLE_HOME/lib/network/admin
    python query_data
  displayName: 'Attempt to run python script with locally valid environment variables'
This fails with the error TNS:could not resolve the connect identifier specified. What I have done:
Checked that the locations I am referring to match the actual oracle-instantclient-basic installation
Copied the TNSNAMES.ORA, ldap.ORA etc. that I am using on my own machine and verified that they are present in the desired location (/usr/lib/oracle/21/client64/lib/network/admin)
Checked that TNS_ADMIN points to the correct path (/usr/lib/oracle/21/client64/lib/network/admin)
The SQL query does not complain about a missing client, so it is aware of the installation. Why doesn't it read the TNS_ADMIN path or its contents correctly?
On Linux, change the file names to lowercase: tnsnames.ora, sqlnet.ora, and ldap.ora. If you run, say, strace sqlplus a/b@c, you can see that it looks for the lowercase names.
With Instant Client, don't set ORACLE_HOME.
There's no need to set LD_LIBRARY_PATH, since ldconfig is used.
There's no need to set TNS_ADMIN, since you have moved the configuration files to the default location.
You can simplify your install by using alien -i --scripts oracle-instantclient-basic-21.5.0.0.0-1.x86_64.rpm. This will automatically do the ldconfig step for you.
Hopefully you have installed the Python cx_Oracle module somehow.
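Once the files are renamed and cx_Oracle is installed, the connection itself needs nothing more than the alias; a minimal sketch (the user, password, and mydb alias are placeholders for your own values):

import cx_Oracle

# "mydb" must match an alias in the lowercase tnsnames.ora
connection = cx_Oracle.connect(user="scott", password="tiger", dsn="mydb")
cursor = connection.cursor()
cursor.execute("SELECT sysdate FROM dual")
print(cursor.fetchone())
connection.close()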

Auto-choose platform (or other) condition in tox sections

I want to run one specific tox section and have it auto-decide on the concrete platform.
The example code snippet below works fine if I just run tox -e ALL. The platform condition then nicely selects the correct platform.
However, I want to address and run only a specific section, for instance tox -e other (not tox -e other-win or tox -e other-linux), and have tox auto-choose the corresponding platform (or any other) condition.
I don't know if this way of setting up conditions in tox is not possible, or if I'm missing something.
[tox]
skipsdist = true

[testenv:systest-{win, linux}]
platform =
    linux: linux
    win: win|msys
whitelist_externals =
    win: cmd
    linux: sh
commands =
    win: cmd /r echo {env:OS}
    linux: sh -c echo {env:OS}

[testenv:other-{win, linux}]
platform =
    linux: linux
    win: win|msys
whitelist_externals =
    win: cmd
    linux: sh
commands =
    win: cmd /r echo {env:OS}
    linux: sh -c echo {env:OS}
You could give the tox-factor plugin a try.
For example:
tox.ini
[tox]
envlist =
    alpha-{redmond,tux}
    bravo-{redmond,tux}
requires =
    tox-factor
skipsdist = true

[testenv]
commands =
    python -c 'import sys; print("platform", sys.platform)'
platform =
    redmond: win32
    tux: linux
This gives the following four environments:
$ tox --listenvs
alpha-redmond
alpha-tux
bravo-redmond
bravo-tux
That can be selected according to the factors:
$ tox --listenvs --factor tux
alpha-tux
bravo-tux
$ tox --listenvs --factor alpha
alpha-redmond
alpha-tux
And then run like this (for example on a Linux platform):
$ tox --factor bravo
bravo-tux run-test-pre: PYTHONHASHSEED='1770792708'
bravo-tux run-test: commands[0] | python -c 'import sys; print("platform", sys.platform)'
platform linux
________________________________________________ summary ________________________________________________
SKIPPED: bravo-redmond: platform mismatch ('linux' does not match 'win32')
bravo-tux: commands succeeded
congratulations :)
References:
https://github.com/tox-dev/tox/issues/1338
https://pypi.org/project/tox-factor/

How can I use a Docker container as a virtualenv for running Python tests from my IDE?

Don't get me wrong, virtualenv (or pyenv) is a great tool, and the whole concept of virtual environments is a great improvement on developer environments, mitigating the whole Snowflake Server anti-pattern.
But nowadays Docker containers are everywhere (for good reasons) and it feels odd having your application running on a container but also setting up a local virtual environment for running tests and such in the IDE.
I wonder if there's a way we could leverage Docker containers for this purpose?
Summary
Yes, there's a way to achieve this: by configuring a remote Python interpreter and a "sidecar" Docker container.
This Docker container will have:
A volume mounted to your source code (henceforth, /code)
SSH setup
SSH enabled for the root:password credentials and the root user allowed to login
Get the sidecar container ready
The idea here is to duplicate your app's container and add SSH abilities to it. We'll use docker-compose to achieve this:
docker-compose.yml:
version: '3.3'
services:
  dev:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 127.0.0.1:9922:22
    volumes:
      - .:/code/
    environment:
      DEV: 'True'
    env_file: local.env
Dockerfile.dev
FROM python:3.7
ENV PYTHONUNBUFFERED 1
WORKDIR /code
# Copying the requirements, this is needed because at this point the volume isn't mounted yet
COPY requirements.txt /code/
# Installing requirements; if you don't use requirements files, you should.
# More info: https://pip.pypa.io/en/stable/user_guide/
RUN pip install -r requirements.txt
# Similar to the above, but with just the development-specific requirements
COPY requirements-dev.txt /code/
RUN pip install -r requirements-dev.txt
# Setup SSH with secure root login
RUN apt-get update \
&& apt-get install -y openssh-server netcat \
&& mkdir /var/run/sshd \
&& echo 'root:password' | chpasswd \
&& sed -i 's/\#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
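Before pointing an IDE at the container, it's worth confirming that the mapped SSH port actually answers after docker-compose up -d dev. A quick standard-library check (host and port taken from the compose file above):

import socket

# Raises if nothing is listening on the sidecar's mapped SSH port;
# a healthy sshd replies with a banner starting with b"SSH-"
with socket.create_connection(("127.0.0.1", 9922), timeout=5) as sock:
    print(sock.recv(32))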
Setting up PyCharm Professional Edition
Preferences (CMD + ,) > Project Settings > Project Interpreter
Click on the gear icon next to the "Project Interpreter" dropdown > Add
Select "SSH Interpreter" > Host: localhost, Port: 9922, Username: root > Password: password > Interpreter: /usr/local/bin/python, Sync folders: Project Root -> /code, Disable "Automatically upload..."
Confirm the changes and wait for PyCharm to update the indexes
Setting up Visual Studio Code
Install the Python extension
Install the Remote - Containers extension
Open the Command Palette and type Remote-Containers, then select Attach to Running Container... and select the running Docker container
VS Code will restart and reload
On the Explorer sidebar, click the Open Folder button and then enter /code (this will be loaded from the remote container)
On the Extensions sidebar, select the Python extension and install it on the container
When prompted for which interpreter to use, select /usr/local/bin/python
Open the Command Palette and type Python: Configure Tests, then select the unittest framework
TDD Enablement
Now that you can run your tests directly from your IDE, use it to try out Test-Driven Development! One of its key points is a fast feedback loop, and not having to wait for the full test suite to finish just to see whether your new test passes is great. Just write the test and run it right away!
Reference
The contents of this answer are also available in this GIST.

Running Windows Server Core in Docker Container

My Linux containers run like a charm, but the switch to Windows Server Core in my Docker containers is driving me crazy!
My Dockerfile doesn't build, although it is as simple as my Linux Dockerfiles:
FROM microsoft/windowsservercore
#Install Chocolatey
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "(iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex)"
ENV PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
#Install python
RUN choco install -fy python2
RUN refreshenv
ENV PYTHONIOINPUT=UTF-8
RUN pip install -y scipy
Sometimes I was able to install Chocolatey, which then resulted in a failure to install scipy via pip; or, curiously, starting five minutes ago, even the installation of Chocolatey fails:
iwr : The remote name could not be resolved: 'chocolatey.org'
At line:1 char:2
+ (iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:Htt
pWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShe
ll.Commands.InvokeWebRequestCommand
Here are some specs on my Docker for Windows Installation:
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 3
Server Version: 1.13.0
Storage Driver: windowsfilter
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: l2bridge l2tunnel nat null overlay transparent
Swarm: inactive
Default Isolation: hyperv
Kernel Version: 10.0 14393 (14393.693.amd64fre.rs1_release.1612
Operating System: Windows 10 Education
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.903 GiB
Name: xxxx
ID: deleted
Docker Root Dir: C:\ProgramData\Docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: -1
Goroutines: 18
System Time: 2017-01-31T16:14:36.3753129+01:00
EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Any ideas?
I was unable to get refreshenv to work, so I used multiple PowerShell sessions instead; I've included them in case they are useful to someone in the future.
#Install Chocolatey, Python and the Python package manager; each PowerShell session will reload the PATH from the previous step
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex"
RUN @powershell -NoProfile -ExecutionPolicy unrestricted -Command "choco install -y python3"

Trouble activating virtualenv on server via Fabric

I am trying to run some Django management commands via Fabric on my staging server.
The problem is that Fabric does not seem to activate the virtualenv, and thus uses the system Python/libs when executing the commands.
On the server the Django app runs in a virtualenv (no, I don't use virtualenvwrapper yet...).
Using Fabric (1.0.1) a command might look like this when run from my box:
The fabfile method:
def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)
The output:
$ fab staging master collectstatic
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'collectstatic'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
[myserver.no] Login password:
[myserver.no] out: Unknown command: 'collectstatic'
[myserver.no] out: Type 'manage.py help' for usage.
I know of course that the Django command collectstatic does not exist in versions prior to 1.3, which leads me to think that the system Python (which has Django 1.2) is being used.
My fabfile/project layout is based on the great fabfile of the Tribapps guys
So I created a Fabric method to test the Python version:

def pythonver():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        run('source %(env_path)s/bin/activate && echo "import sys; print sys.path" | python ' % env)
When run it gives the following output:
$ fab staging master pythonver
[myserver.no] Executing task 'master'
[myserver.no] Executing task 'pythonver'
[myserver.no] run: source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
[myserver.no] Login password:
[myserver.no] out: ['', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/lib/pymodules/python2.6', '/usr/lib/pymodules/python2.6/gtk-2.0',
As you can see, it uses the system Python and not my virtualenv located in /home/newsapps/sites/mysite/env.
But if I run this command directly on the server
source /home/newsapps/sites/mysite/env/bin/activate && echo "import sys; print sys.path" | python
...then it outputs the right paths from the virtualenv.
What am I doing wrong, since the commands are not run with the Python from my virtualenv when using Fabric?
You should call the Python binary from your virtualenv's bin directory; then you can be sure it uses the virtualenv's version of Python.
/home/newsapps/sites/mysite/env/bin/python /home/newsapps/sites/mysite/repository/mysite/configs/staging/manage.py collectstatic --noinput -v0
I wouldn't bother with activating the virtualenv, just give the full path to the virtualenv's python interpreter. That will then use the correct PYTHONPATH, etc.
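Applied to the fabfile from the question, that suggestion boils down to something like this (a sketch; the paths and env keys come straight from the question, and production/staging are the question's own task functions):

from fabric.api import env, require, run, settings

def collectstatic():
    require('settings', provided_by=[production, staging])
    with settings(warn_only=True):
        # Call the virtualenv's interpreter directly -- no activate needed
        run('%(env_path)s/bin/python %(repo_path)s/%(project_name)s/configs/%(settings)s/manage.py collectstatic --noinput -v0' % env)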
I had the same problem. Couldn't solve it the easy way. So I just used the full path to the python bin file inside the virtualenv. I'm not a pro in Python, but I guess it's the same thing in the end.
It goes something like this in my fab file:
PYTHON = '/home/dudus/.virtualenvs/pai/bin/python'
PIP = '/home/dudus/.virtualenvs/pai/bin/pip'

def update_db():
    with cd(REMOTE_DIR + 'application/'):
        run('%s ./manage.py syncdb --settings="%s"' %
            (PYTHON, SETTINGS))  # syncdb
        run('%s ./manage.py migrate --settings="%s"' %
            (PYTHON, SETTINGS))  # south migrate
This will work perfectly :)
from __future__ import with_statement
from fabric.api import *
from contextlib import contextmanager as _contextmanager

env.hosts = ['servername']
env.user = 'username'
env.directory = '/path/to/virtualenvs/project'
env.activate = 'source /path/to/virtualenvs/project/bin/activate'

@_contextmanager
def virtualenv():
    with cd(env.directory):
        with prefix(env.activate):
            yield

def deploy():
    with virtualenv():
        run('pip freeze')
This approach worked for me; you can apply it too.
from fabric.api import run

# ... other code...

def install_pip_requirements():
    # activate and pip must run inside the same shell invocation,
    # otherwise pip would execute outside the virtualenv
    run("/bin/bash -l -c 'source venv/bin/activate && "
        "pip install -r requirements.txt'")
Assuming venv is your virtualenv directory, add this method wherever appropriate.
