Upgrade pyramid, SQLAlchemy, zope and rebuild Python project - python

I have inherited a Python REST API that runs on Ubuntu.
It uses Python 2.7, Pyramid 1.4.1, zope 0.6, transaction 1.3, SQLAlchemy 0.7.9, and WebError 0.10.3, with nginx as the web server.
Oh, and it uses cx_Oracle to connect to the Oracle instance.
My main goal is to update these Python components to the latest releases (e.g., zope is now at 2.0).
The project (and other items) are in a folder called rest_api, where I can see setup.py, and some other custom setups, setup_prod.py, etc.
I went to /usr/local/lib/python2.7/site-packages and tried running "pip install --upgrade [package_name]"; the command completes successfully for each package.
Is this all I need to do, or do I have to rebuild the project with setup*.py?
I found some notes that show two commands that look like what I want:
rebuild_cmd = "cd %s/python/rest_api/; /usr/bin/env python setup_prod.py build" % current_dir
install_cmd = "cd %s/python/rest_api/; sudo /usr/bin/env python setup_prod.py install" % current_dir
...but when I try running "python setup_prod.py build" from the directory, with or without sudo, I get a traceback error.
To summarize:
How do I upgrade Python packages like zope, SQLAlchemy, Pyramid, etc. to the latest releases?
Do I need to rebuild the project if I am only upgrading the Python packages listed above?
Without knowing the program details, is there a "basic" Python build sequence I can try, e.g. run setup.py build, then setup.py install, or something else?

It uses Python 2.7, Pyramid 1.4.1, zope 0.6, transaction 1.3, SQLAlchemy 0.7.9, WebError 0.10.3
How do you know this? Find the place where these versions are pinned (I would guess somewhere in setup_prod.py), change them to what you want, build the project, and check whether the app works with the new dependencies.
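For reference, pinned dependencies in a setup.py usually look like the sketch below (hypothetical contents; the real setup_prod.py may structure this differently):

import setuptools

setuptools.setup(
    name='rest_api',  # hypothetical; taken from the question's folder name
    version='1.0',    # hypothetical
    install_requires=[
        'pyramid==1.4.1',     # bump these pins to the releases you want,
        'SQLAlchemy==0.7.9',  # then rebuild and re-test the app
        'transaction==1.3',
        'WebError==0.10.3',
    ],
)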
...but when I try running "python setup_prod.py build" from the directory, with or without sudo, I get a traceback error.
Please show your traceback.
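In the meantime, the generic sequence implied by the notes in your question would be (a sketch only; the directory and setup_prod.py come from your question and are not verified):

cd /path/to/python/rest_api
python setup_prod.py build
sudo python setup_prod.py install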

Related

Force Dataflow workers to use Python 3?

I have a simple batch Apache Beam pipeline. When run locally, the DirectRunner works fine, but with the DataflowRunner it fails to install one dependency from requirements.txt. The reason is that the specific package is for Python 3, and the workers are (apparently) running the pipeline with Python 2.
The pipeline is done and working fine locally (DirectRunner) with Python 3.7.6. I'm using the latest Apache Beam SDK (apache-beam==2.16.0 in my requirements.txt).
One of the modules required by my pipeline is:
from lbcapi3 import api
So my requirements.txt sent to GCP has a line with:
lbcapi3==1.0.0
That module (lbcapi3) is on PyPI, but it's targeted only at Python 3.x. When I run the pipeline in Dataflow I get:
ERROR: Could not find a version that satisfies the requirement lbcapi3==1.0.0 (from -r requirements.txt (line 27)) (from versions: none)
ERROR: No matching distribution found for lbcapi3==1.0.0 (from -r requirements.txt (line 27))
That makes me think that the Dataflow worker is running the pipeline with Python 2.x to install the dependencies in requirements.txt.
Is there a way to specify the Python version used by a Google Dataflow pipeline (the workers)?
I tried adding this as the first line of my file api-etl.py, but it didn't work:
#!/usr/bin/env python3
Thanks!
Follow the instructions in the quickstart to get up and running with your pipeline. When installing the Apache Beam SDK, make sure to install version 2.16, since this is the first version that officially supports Python 3. Please check your version.
You can use the Apache Beam SDK with Python versions 3.5, 3.6, or 3.7 if you want to migrate from a Python 2.x environment.
For more information, refer to this documentation. Also, take a look at the preinstalled dependencies.
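To make sure the SDK itself is installed for Python 3 on your side, install it with a Python 3 pip (the [gcp] extra is what the Dataflow runner needs; version as stated above):

$ pip3 install 'apache-beam[gcp]==2.16.0'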
Edit, after additional information was provided:
I have reproduced the problem on Dataflow. I see two solutions.
You can use the --extra_package option, which allows staging local packages in an accessible way. Instead of listing the local package in requirements.txt, create a tarball of it (e.g. my_package.tar.gz) and use the --extra_package option to stage it.
Clone the repository from Github:
$ git clone https://github.com/6ones/lbcapi3.git
$ cd lbcapi3/
Build the tarball with the following command:
$ python setup.py sdist
The last few lines will look like this:
Writing lbcapi3-1.0.0/setup.cfg
creating dist
Creating tar archive
removing 'lbcapi3-1.0.0' (and everything under it)
Then, run your pipeline with the following command-line option:
--extra_package /path/to/package/package-name
In my case:
--extra_package /home/user/dataflow-prediction-example/lbcapi3/dist/lbcapi3-1.0.0.tar.gz
Make sure that all of the required options are provided in the command (job_name, project, runner, staging_location, temp_location):
python prediction/run.py --runner DataflowRunner --project $PROJECT --staging_location $BUCKET/staging --temp_location $BUCKET/temp --job_name $PROJECT-prediction-cs --setup_file prediction/setup.py --model $BUCKET/model --source cs --input $BUCKET/input/images.txt --output $BUCKET/output/predict --extra_package /home/user/dataflow-prediction-example/lbcapi3/dist/lbcapi3-1.0.0.tar.gz
The error you faced should disappear.
The second solution is to list the additional libraries your app uses in a setup.py file; refer to the documentation.
Create a setup.py file for your project:
import setuptools

setuptools.setup(
    name='PACKAGE-NAME',
    version='PACKAGE-VERSION',
    install_requires=[],
    packages=setuptools.find_packages(),
)
You can get rid of the requirements.txt file and instead add all packages it contained to the install_requires field of the setup call.
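Applied to this case, the setup call would look roughly like this (name and version are illustrative placeholders):

import setuptools

setuptools.setup(
    name='api-etl',        # hypothetical project name
    version='0.0.1',       # hypothetical
    install_requires=[
        'lbcapi3==1.0.0',  # moved here from requirements.txt
    ],
    packages=setuptools.find_packages(),
)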
The simple answer is that when deploying your pipeline, you need to make sure your local environment is on Python 3.5, 3.6, or 3.7. If it is, the Dataflow workers will run the same version once your job is launched.
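A quick way to confirm which interpreter will do the deployment (output shown for the 3.7.6 environment mentioned above):

$ python3 --version
Python 3.7.6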

ANTLR3 python runtime not detected

I want to use the fstpso package in Python, which needs the ANTLR3 Python runtime.
I downloaded antlr_python_runtime-3.1.3.tar.gz from http://www.antlr3.org/download/Python/ and ran the command sudo python setup.py install. The output of the command was
Installed /path/to/python/packages/antlr_python_runtime-3.1.3-py2.7.egg
But after this when I try to import fstpso module in python, it throws the error
The ANTLR3 python runtime was not detected; pyfuzzy cannot import FST-PSO's FLC files
I am using Python 2.7.12 on Linux.
Is there something I did wrong? Or do I have to update a PATH in the environment?
Thanks for your help!!
I'm fst-pso's main developer. In the last few days I reimplemented the Sugeno reasoner from scratch, to finally remove the pyfuzzy/ANTLR3 dependency. I just uploaded the new package to PyPI.
Now you can pip install the new version of fst-pso (v1.4.0); please let me know if it works correctly.
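For example (package name as given above; --upgrade only matters if an older release is already installed):

$ pip install --upgrade fst-pso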

Getting the correct version of Pmw to install

Problem:
I'd like to install Pmw 2.0.0 (project page here) so that I can use it with tkinter in Python 3. The setup script from the package detects which version of Python you're using and installs the version that is appropriate for your system (Ubuntu 15 in my case). I can't find any references to switches that make it install 2.0.0 instead of 1.3.3 (the Python 2.7 version), nor have I been able to get the script to install to the Python 3 libraries.
What I've done so far:
I've changed the python version detector in the setup script from
if sys.version_info[0] < 3:
    version = '2.0.0' # really '1.3.3'
    packages = ['Pmw', 'Pmw.Pmw_1_3_3', 'Pmw.Pmw_1_3_3.lib',]
to
if sys.version_info[0] < 2:
    version = '2.0.0' # really '1.3.3'
    packages = ['Pmw', 'Pmw.Pmw_1_3_3', 'Pmw.Pmw_1_3_3.lib',]
to attempt to force the installer to default to the Python 3 version, which it does, but it installs the files in the Python 2.7 libraries (/usr/local/lib/python2.7/dist-packages).
What I want to do:
I'm looking for a way to force the installer to put the 3.4-compatible package into the python3 libraries. If that means getting it to install both packages in their respective correct directories, that's fine, too. I'm stumped about what to try next.
Answered by RazZiel on AskUbuntu:
Link here.
Instead of using the commands sudo python setup.py build and then sudo python setup.py install, I should have been using python3 to execute the setup script. I managed to outthink myself pretty badly on this one.
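In other words (the same commands as above, with the interpreter swapped):

$ sudo python3 setup.py build
$ sudo python3 setup.py install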

Verify thread-safety MySQLdb (Python) prior to Trac installation

I'm trying to install Trac manually for the first time. I don't want to use a one-click installer like Bitnami; I want to learn how to install Trac manually, so I'm following the instructions carefully. I'm installing it on a Windows localhost for now, before installing it in a Linux environment.
As I follow the instructions carefully, I needed to install Python + MySQLdb, and I read this:
thread-safety is important
(...) verify that it is thread-safe by calling MySQLdb.thread_safe() from a standalone Python script (i.e., not under Apache). If the stand-alone test reports that MySQLdb is indeed thread-safe (...)
I've just installed MySQLdb 1.2.4 and I'd like to verify this. I've Googled but haven't found an example of this check, and I have no idea about Python. How can I verify whether I've got a thread-safe installation?
Run this command. If you get 1 in the output, your installation is thread-safe.
python -c "import MySQLdb ; print MySQLdb.thread_safe()"
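The same check as a standalone script, as the Trac instructions suggest (hypothetical filename; Python 2 print syntax to match the installation above):

# check_mysqldb.py
import MySQLdb
print MySQLdb.thread_safe()  # prints 1 if the build is thread-safe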

ImportError running Google's python appengine on Ubuntu

I'm trying to teach myself Python using Google's AppEngine, and I can't get the dev server running. I get this error:
Traceback (most recent call last):
  File "/opt/google_appengine/google_appengine_1.2.7/dev_appserver.py", line 60, in <module>
    run_file(__file__, globals())
  File "/opt/google_appengine/google_appengine_1.2.7/dev_appserver.py", line 57, in run_file
    execfile(script_path, globals_)
  File "/opt/google_appengine/google_appengine_1.2.7/google/appengine/tools/dev_appserver_main.py", line 65, in <module>
    from google.appengine.tools import os_compat
ImportError: cannot import name os_compat
Ubuntu 9.10 comes with Python 2.6 (didn't work), I installed Python 2.5 (didn't work), and I have tried running it with python dev_appserver.py helloWorld (didn't work), as well as running dev_appserver.py after editing the first line to be:
#!/usr/bin/env python2.5
I can't seem to find anything online about this error. The only problem I've found is about using Python 2.5, and I think I've solved that.
Kyle suggested I need to set my PYTHONPATH variable. After running
export PYTHONPATH=/opt/google_appengine/google_appengine_1.2.7
I still get the same error trying to run dev_appserver.py. Am I setting PYTHONPATH wrong? Alternatively, how do I uninstall the protocol buffers Python package? I have no use for Ubuntu One and had already uninstalled it.
The problem appears to be the fact that Karmic Koala 9.10 (the latest version of Ubuntu) ships with Ubuntu One, a Python app that depends on Google's protocol buffers library. The python-protobuf package provides the google.protobuf package in /usr/lib/pymodules/python2.6.
Unfortunately, the AppEngine SDK includes another package called google.appengine. So somewhere in your code, the google package is being imported, and the package that contains protobuf is being found on PYTHONPATH first. Python caches the first package it finds in sys.modules, so the second google package in the SDK will never be imported.
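You can see which google package won with a quick interactive check (illustrative; the path shown assumes the python-protobuf location above):

>>> import google
>>> google.__path__
['/usr/lib/pymodules/python2.6/google']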
You could move the google AppEngine SDK up to the front of your PYTHONPATH. That should ensure that Python finds the google.appengine package instead of the package provided by python-protobuf.
PYTHONPATH=/opt/google_appengine/google_appengine_1.2.7 \
python dev_appserver.py helloWorld
This is a bug that should be reported to the AppEngine SDK project.
Update: I've submitted a bug against the AppEngine API.
It was a file permission problem. os_compat.py wasn't readable by my user, only by root. I'm not sure if I screwed this up, or if the default permissions don't have read-all, but that was the fix.
I hate to accept my own answer after Kyle gave such a good response, but I don't need the $PYTHONPATH fix to make it work now that I've run sudo chmod -R +r /opt/google_appengine/google_appengine_1.2.7
With that error, Python is saying that it can't find or read the name that it's trying to import. Since the import of os_compat is the very first executable line of AppEngine's dev_appserver.py, I suspect that there's a problem with the way that your paths are configured.
The latest version of Ubuntu (10.10) has also removed Python 2.5 - making it a pain to install the App Engine development environment.
I (finally) got my environment working (including App Engine Helper for unit testing). I built this bash script, which might be useful to others. It installs:
sqlite
libsqlite
pep8
mock
OpenSSL
Python 2.5.2
Python SSL Library
Django 1.1 (latest version in production)
App Engine
App Engine Helper
http://pageforest.googlecode.com/hg/tools/pfsetup
Ubuntu 11.04 comes with Python 2.6 as the default version. It is suggested to use Google App Engine with Python 2.5, though I have been using it for many years with Python 2.6 without any issues.
What you need to do in order to run it smoothly with Python 2.6 is edit google/appengine/tools/dev_appserver.py and add these three lines
'_counter',
'_fastmath',
'strxor',
after 'XOR' and before '_Crypto_Cipher__AES', around line ~1350.
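The edited region of that list should then read roughly like this (entry names as given above; the surrounding entries and exact line number may differ between SDK releases):

'XOR',
'_counter',
'_fastmath',
'strxor',
'_Crypto_Cipher__AES',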
If you are now using Google Cloud SDK, put this into ~/.profile.
export CLOUDSDK_ROOT_DIR="/path/to/google/cloud/sdk/"
export APPENGINE_HOME="${CLOUDSDK_ROOT_DIR}/platform/appengine-java-sdk"
export GAE_SDK_ROOT="${CLOUDSDK_ROOT_DIR}/platform/google_appengine"
# The next line enables Java libraries for Google Cloud SDK
export CLASSPATH="${APPENGINE_HOME}/lib":${CLASSPATH}
# The next line enables Python libraries for Google Cloud SDK
export PYTHONPATH=${GAE_SDK_ROOT}:${PYTHONPATH}
# * OPTIONAL STEP *
# If you wish to import all Python modules, you may iterate in the directory
# tree and import each module.
#
# * WARNING *
# Some modules have two or more versions available (Ex. django), so the loop
# will import always its latest version.
for module in ${GAE_SDK_ROOT}/lib/*; do
    if [ -r ${module} ]; then
        PYTHONPATH=${module}:${PYTHONPATH}
    fi
done
unset module
Do not put this inside ~/.bashrc, because every time you open a bash session all those modules will be added again and again to your PYTHONPATH environment variable.
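To confirm the variables took effect, reload the profile and do a quick sanity check (assuming the paths above; the import should succeed silently):

$ source ~/.profile
$ python -c "import google.appengine"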
