My program throws an error when trying to run my function - python

I tried to write a function that checks whether my Ubuntu Server has a specific package installed via apt list; when the condition isn't met, the program should, in theory, install any dependencies necessary for the other piece of software to work. Here's the function I wrote:
# Docker Configuration Tool
def DCT():
    cache = apt.Cache()
    if cache['docker-ce'].is_installed:
        print("Docker and Docker-compose are installed on this system...")
        print("If you don't have MySQL Server installed on your system use Docker to prepare and configure your Server")
        __run_file = [ BASH COMMANDS ]
        OS_MCE(__run_file)
    else:
        print("Docker and Docker-compose are not installed on this system!")
        print("Preparing Environment for the Installation...\n")
        __install_docker = [ BASH COMMANDS ]
        OS_MCE(__install_docker)
This is the error I get when trying to run the function:
Traceback (most recent call last):
  File "MIM.py", line 304, in <module>
    NSWIT(True)
  File "MIM.py", line 297, in NSWIT
    Menu()
  File "MIM.py", line 257, in Menu
    DCT()
  File "MIM.py", line 117, in DCT
    if cache['docker-ce'].is_installed:
  File "/usr/lib/python3/dist-packages/apt/cache.py", line 305, in __getitem__
    raise KeyError('The cache has no package named %r' % key)
KeyError: "The cache has no package named 'docker-ce'"

Try changing your if expression to check that the docker-ce key is in the cache map before trying to access it. If there isn't a key with that name, it makes sense to assume that the package isn't installed. So like this:
# Docker Configuration Tool
def DCT():
    cache = apt.Cache()
    if 'docker-ce' in cache and cache['docker-ce'].is_installed:
        print("Docker and Docker-compose are installed on this system...")
        print("If you don't have MySQL Server installed on your system use Docker to prepare and configure your Server")
        __run_file = [ BASH COMMANDS ]
        OS_MCE(__run_file)
    else:
        print("Docker and Docker-compose are not installed on this system!")
        print("Preparing Environment for the Installation...\n")
        __install_docker = [ BASH COMMANDS ]
        OS_MCE(__install_docker)
Error messages are your friend. Read them carefully. In this case, the error was telling you precisely what was wrong, and once you've been doing this for a while, the fix becomes obvious from that alone.
This is also a good opportunity to learn about and understand short-circuit evaluation of logical expressions. The first clause of your conditional expression ensures that the second clause isn't evaluated when it would throw the error you were seeing (well, duh, right?)...but if you don't fully understand why that is, it's worth taking the time to understand clearly. Maybe see: https://pythoninformer.com/python-language/intermediate-python/short-circuit-evaluation/
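To see the short-circuiting in isolation, here's a minimal sketch of my own, with a plain dict standing in for the apt cache, just for illustration:
# Minimal illustration of short-circuit evaluation: the lookup on the right
# of "and" is only attempted when the membership test on the left is True.
packages = {'vim': True}  # stand-in for the apt cache

if 'docker-ce' in packages and packages['docker-ce']:
    print("installed")
else:
    # We land here without packages['docker-ce'] ever being evaluated,
    # so no KeyError is raised.
    print("not installed")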

Related

Same docker container working differently on different machines

I created a docker container of a Python program and I need to run it on a virtual machine.
I prepared a Dockerfile to create the container, where I launch the main script by using:
CMD python3 /home/project_name_folder/script_00.py
built it as usual:
docker build -t registry_address/image_name -f folder_name/Dockerfile_01 .
and pushed the image to a registry. Then, I pulled and ran the image on the VM by:
docker run -p 5555:8050 registry_address/image_name
The problem is that, on the VM, I'm getting an error which never occurs when I run a container generated from the same image on my PC.
The error looks something like this:
Traceback (most recent call last):
  File "/home/project_name_folder/script_02.py", line 188, in <module>
    infile = open(sys.argv[1], 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'temp/3004601723038773'
So, script_00.py is executed, creates the file temp/3004601723038773, and calls another script, script_02.py, passing the file name as an argument. However, the container on the VM seems unable to find the corresponding file.
What surprises me is that this never happens when I run a container from the same image on my local computer, so I assume it's not due to the code itself. Could it be related to the Docker version, or something similar?
P.S. sorry if the terminology I'm using is not correct, I'm a beginner when it comes to Docker!
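No answer is recorded here, but one common cause of this symptom is that a relative path like temp/... resolves against the process's current working directory, which can differ between the two environments. A hedged sketch of how script_00.py could sidestep that by anchoring the path to its own location; the file name comes from the traceback above, everything else is assumed:
# Hypothetical sketch for script_00.py (the real scripts aren't shown in the
# question): build the temp path from the script's own location instead of
# the current working directory, so the same absolute path is produced no
# matter where the container starts the process.
import os
import subprocess

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
temp_path = os.path.join(BASE_DIR, "temp", "3004601723038773")

os.makedirs(os.path.dirname(temp_path), exist_ok=True)
with open(temp_path, "wb") as f:
    f.write(b"...")  # placeholder payload

# Hand the absolute path to the second script.
subprocess.run(["python3", os.path.join(BASE_DIR, "script_02.py"), temp_path], check=True)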

How to run paramiko demo_server.py?

https://raw.githubusercontent.com/paramiko/paramiko/master/demos/demo_server.py
I see the above demo_server of paramiko, but I don't see instructions on how to run it. I ran the following ./demo_server.py command, but once I run ssh robey@127.0.0.1 -p 2200, the server fails. Could anybody let me know the complete steps to run this example? Thanks.
$ python3 ./demo_server.py
Read key: 60733844cb5186657fdedaa22b5a57d5
Listening for connection ...
Got a connection!
*** Caught exception: <class 'ImportError'>: Unable to import a GSS-API / SSPI module!
Traceback (most recent call last):
  File "./demo_server.py", line 140, in <module>
    t = paramiko.Transport(client, gss_kex=DoGSSAPIKeyExchange)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/paramiko/transport.py", line 445, in __init__
    self.kexgss_ctxt = GSSAuth("gssapi-keyex", gss_deleg_creds)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/paramiko/ssh_gss.py", line 107, in GSSAuth
    raise ImportError("Unable to import a GSS-API / SSPI module!")
ImportError: Unable to import a GSS-API / SSPI module!
$ ssh robey@127.0.0.1 -p 2200
kex_exchange_identification: read: Connection reset by peer
I managed to get it to work. Make sure to go through these steps (thank you Sandeep for the pip insight); chances are you are missing the Kerberos dependencies:
1. You may need to run pip install gssapi in the CLI if that hasn't already been done (I'm using the Windows Command Prompt; Linux/WSL might need pip3 instead, depending on your version of Python).
2. From there you will need to import the gssapi library at the top of the code with the other imports, so just add import gssapi to demo_server.py.
3. After running demo_server.py again, the CLI should eventually say something like it is missing files located in Program Files\MIT\Kerberos\bin. Since Kerberos is a dependency of gssapi, you can install it from here: https://web.mit.edu/KERBEROS/dist/
4. Make sure you do a custom install if you're not sure where it will be installed, so that you can set up the file location the CLI said was missing in step 3 (it should automatically say Program Files\MIT). I unchecked the boxes for auto-start and tickets, but I'm not sure what your preferences may be. After all that, a computer restart is required.
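If you want to confirm the GSS pieces are in place before launching the demo again, here's a small hedged check of my own (not part of the steps above):
# Sanity-check sketch: verify that the gssapi package paramiko needs is
# importable before running demo_server.py.
try:
    import gssapi  # noqa: F401
    print("gssapi is importable - GSS-API key exchange should work")
except ImportError as exc:
    print("gssapi is NOT importable: %s" % exc)
    print("install gssapi (and MIT Kerberos on Windows) before running demo_server.py")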

Python module import failure in Jenkins

I have a project I'm trying to test and run on Jenkins. On my machine it works fine, but when I try to run it in Jenkins, it fails to find a module in the workspace.
In the main workspace directory, I run the command:
python xtests/app_verify_auto.py
And get the error:
+ python /home/tomcat7/.jenkins/jobs/exit103/workspace/xtests/app_verify_auto.py
Traceback (most recent call last):
  File "/home/tomcat7/.jenkins/jobs/exit103/workspace/xtests/app_verify_auto.py", line 19, in <module>
    import exit103.data.db as db
ImportError: No module named exit103.data.db
Build step 'Execute shell' marked build as failure
Finished: FAILURE
The directory exit103/data exists in the workspace and is a correct path, but python can't seem to find it.
This error exists both with and without virtualenv.
It may be caused by your PATH setting not being right in the Jenkins environment. In fact, the environments for your default user and the jenkins user are not the same.
You may try to find out what PATH and PYTHONPATH are in your jenkins user environment.
Try running "shell commands" in Jenkins, such as echo $PATH, to see what they are.
Most of the time, you need to set the PATH yourself.
You may reference this answer:
Jenkins: putting my Python module on the PYTHONPATH
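A hedged sketch of the same idea applied inside the script itself, as an alternative to exporting PYTHONPATH in the shell step (it assumes xtests/ sits directly under the workspace root):
# Sketch for the top of xtests/app_verify_auto.py: put the workspace root
# (the parent of xtests/) on sys.path before importing project modules, so
# the import works no matter which directory Jenkins runs the script from.
import os
import sys

WORKSPACE_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))
sys.path.insert(0, WORKSPACE_ROOT)

import exit103.data.db as db  # now resolvable without touching PYTHONPATH
With that in place, the script no longer depends on the caller's environment, so it behaves the same locally and under Jenkins.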
Faced the same issue.
For others who are reading this: run the build on your master node. It fixed the problem for me.
Running the build on a slave node doesn't give the workspace proper access to all the Python modules and other commands, such as jq.

Python Evdev binding for OpenWrt

Good day,
I'm a student and I would just like to ask for a minute of your time.
I'm working on a barcode reader connected via USB to a board named Arduino Yun. This board runs a version of embedded Linux derived from OpenWrt on a microprocessor named Atheros AR9331.
I would like to ask: what's necessary to make the Python Evdev binding (python-evdev.readthedocs.org/en/latest/) able to run on this type of MIPS microarchitecture? At the moment, it's only available for Ubuntu and Arch Linux.
I'm guessing that cross-compilation is needed, or that a specific C compiler has to be used inside this Linux.
The current Python version supported on OpenWrt is 2.7.3.
I already know that if you compile C code on your PC, the resulting executable will only run on that type of architecture; if you use that compiled program on the microprocessor, it won't work.
I've used this binding without trouble in Ubuntu on my PC. I followed the instructions (python setup.py install, with setuptools installed beforehand) and it worked just fine.
But regarding OpenWrt, this was not the case.
The Python script I'm using requires this library in its first lines of code in order to read the data from the device (it works like a keyboard, /dev/input/event0):
#!/usr/bin/env python
from evdev import InputDevice, ecodes, list_devices
from select import select
I've seen suggestions to copy the entire library onto the Arduino and run the script from the same folder, but that doesn't work, since the evdev module contains files built for the PC's architecture and not for MIPS.
So, what are the messages displayed for the error?
If you run python setup.py install on OpenWrt to try to install the evdev binding, this appears on screen:
File "setup.py", line 10, in <module>
from setuptools.command.develop import develop
ImportError: No module named setuptools.command.develop
It's obvious from this that you need the aforementioned module. So, I tried to install it with this script (pypi.python.org/pypi/setuptools):
python ez_setup.py
And the output shows this:
Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-11.3.1.zip
Traceback (most recent call last):
  File "ez_setup.py", line 332, in <module>
    sys.exit(main())
  File "ez_setup.py", line 327, in main
    downloader_factory=options.downloader_factory,
  File "ez_setup.py", line 287, in download_setuptools
    downloader(url, saveto)
  File "ez_setup.py", line 209, in download_file_curl
    _clean_check(cmd, target)
  File "ez_setup.py", line 169, in _clean_check
    subprocess.check_call(cmd)
  File "/usr/lib/python2.7/subprocess.py", line 511, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['curl', 'https://pypi.python.org/packages/source/s/setuptools/setuptools-11.3.1.zip', '--silent', '--output', '/mnt/sda1/evdev-0.4.6/setuptools-11.3.1.zip']' returned non-zero exit status 60
I presume this output is due to the fact that PyPI support doesn't exist for Python 2.7.3 on OpenWrt, only for newer versions and other architectures. The Evdev binding requires the setuptools module in order to make things easier and standard, but if the binding is not supported for the target architecture, what's needed to be able to use it anyway?
Thanks for your time,
Good day everyone,
The solution was provided by Georgi Valkov, the creator of the python-evdev binding. I contacted him directly, and he was kind enough to cross-compile a version for OpenWrt / Yun.
You can install the package using the OpenWrt package manager, opkg. The installation process is along the lines of:
$ opkg update
$ opkg install /path/to/python-evdev_0.4.7-1_ar71xx.ipk
To verify that the install was successful:
$ opkg files python-evdev
/usr/lib/python2.7/site-packages/evdev-0.4.7-py2.7.egg-info
/usr/lib/python2.7/site-packages/evdev/genecodes.py
/usr/lib/python2.7/site-packages/evdev/ff.py
/usr/lib/python2.7/site-packages/evdev/_input.so
/usr/lib/python2.7/site-packages/evdev/device.py
/usr/lib/python2.7/site-packages/evdev/events.py
/usr/lib/python2.7/site-packages/evdev/__init__.py
/usr/lib/python2.7/site-packages/evdev/ecodes.py
/usr/lib/python2.7/site-packages/evdev/_ecodes.so
/usr/lib/python2.7/site-packages/evdev/util.py
/usr/lib/python2.7/site-packages/evdev/uinput.py
/usr/lib/python2.7/site-packages/evdev/_uinput.so
This works just fine. Thanks.
PS. If someone needs the file, please contact me. Georgi sent me this address, but I didn't download the file from there because he sent it to me over email.
https://github.com/gvalkov/openwrt-packages-yun/blob/master/lang/python-evdev/Makefile
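As a quick sanity check after the opkg install above, a short sketch (assuming the device still shows up as /dev/input/event0, as in the question) that lists the input devices the binding can see:
#!/usr/bin/env python
# Sanity-check sketch: confirm the cross-compiled evdev package imports and
# can enumerate input devices (e.g. the barcode reader on /dev/input/event0).
from evdev import InputDevice, list_devices

for path in list_devices():
    dev = InputDevice(path)
    print('%s: %s' % (path, dev.name))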
In the output, you can see that curl returned the status code 60. According to man curl:
60 Peer certificate cannot be authenticated with known CA certificates.
According to the setuptools page, you can instead use python ez_setup.py --insecure, but obviously do that at your own risk. Alternatively, you could do what the advanced instructions say: manually download the setuptools tarball, verify its md5 hash yourself, and install it using its setup.py.
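For that manual route, a hedged sketch of the hash check (the expected value is a placeholder; take it from the setuptools download page):
# Verify the downloaded archive's md5 before installing it. EXPECTED_MD5 is a
# placeholder - substitute the real value from the setuptools download page.
import hashlib

EXPECTED_MD5 = '<md5 from the setuptools download page>'

with open('setuptools-11.3.1.zip', 'rb') as f:
    digest = hashlib.md5(f.read()).hexdigest()

print('OK' if digest == EXPECTED_MD5 else 'hash mismatch: %s' % digest)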

Paster requires to be run from program directory?

I'm running a Solaris server which uses supervisor to monitor some Python applications.
Previously, I could run the command:
paster serve /opt/pyapps/menuadmin/prod.ini
from any directory on the server. There were some recent issues and the /opt folder was restored from a previous backup. This folder contained all of the applications including supervisor.
Now we are facing issues where supervisor will not start the applications because of "version conflicts" in Pylons.
This is where it gets weird and it makes no sense why these errors would occur.
If I run the paster command from outside of the program directory, it will throw the version conflict error. eg:
cd /
paster serve /opt/pyapps/menuadmin/prod.ini
Traceback (most recent call last):
  File "/opt/csw/bin/paster", line 8, in <module>
    load_entry_point('PasteScript==1.7.5', 'console_scripts', 'paster')()
  File "/opt/csw/lib/python2.6/site-packages/PasteScript-1.7.5-py2.6.egg/paste/script/command.py", line 93, in run
    commands = get_commands()
  File "/opt/csw/lib/python2.6/site-packages/PasteScript-1.7.5-py2.6.egg/paste/script/command.py", line 135, in get_commands
    plugins = pluginlib.resolve_plugins(plugins)
  File "/opt/csw/lib/python2.6/site-packages/PasteScript-1.7.5-py2.6.egg/paste/script/pluginlib.py", line 82, in resolve_plugins
    pkg_resources.require(plugin)
  File "/opt/csw/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/pkg_resources.py", line 626, in require
  File "/opt/csw/lib/python2.6/site-packages/setuptools-0.6c9-py2.6.egg/pkg_resources.py", line 528, in resolve
pkg_resources.VersionConflict: (Pylons 0.9.7 (/opt/csw/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg), Requirement.parse('Pylons>=0.10'))
But if I run the command from inside the program directory, it will run fine. eg:
cd /opt/pyapps/menuadmin/
paster serve /opt/pyapps/menuadmin/prod.ini
Starting server in PID 29902.
serving on http://127.0.0.1:3002
I absolutely cannot get my head around why this would happen!
Any thoughts or comments at all are appreciated!!!!
Based upon what you have said, it seems you are running two different versions of paster. The first is picking up the older Pylons package (0.9.7), whilst the second has the more up-to-date version that meets or exceeds your app's requirements.
What I would do is first check which version of paster you are running. From outside of the project just run:
which paster
Then run the same command again within the project directory and compare the results. I suspect you will find that the paths differ. If that is the case, then all you need to do is update the version of Pylons for the first one, which I'm guessing is the global install.
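To make that comparison concrete, a small diagnostic sketch (using pkg_resources, which the traceback shows is already involved) can be run once from / and once from the project directory to show which distributions each environment resolves:
# Diagnostic sketch: print which Pylons and PasteScript distributions
# pkg_resources resolves in the current environment. Run it from / and then
# from /opt/pyapps/menuadmin/ and compare the reported locations.
import pkg_resources

for name in ('Pylons', 'PasteScript'):
    dist = pkg_resources.get_distribution(name)
    print('%s %s (%s)' % (dist.project_name, dist.version, dist.location))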
However, as others have commented, it would be better to run apps within a virtualenv, especially if, as you seem to indicate, you have multiple virtualenvs and thus multiple projects. Trust me when I say it will save you loads of headaches later on; this is coming from someone who didn't do it originally.
