dcos cassandra subcommand error - python

Can't seem to install the Cassandra package: Marathon gets stuck in deployment at phase 1/2, and the dcos cassandra subcommand produces the stack trace below. Any help appreciated.
Traceback (most recent call last):
File "/home/azureuser/.dcos/subcommands/cassandra/env/bin/dcos-cassandra", line 5, in <module>
from pkg_resources import load_entry_point
File "/opt/mesosphere/lib/python3.4/site-packages/pkg_resources.py", line 2701, in <module>
parse_requirements(__requires__), Environment()
File "/opt/mesosphere/lib/python3.4/site-packages/pkg_resources.py", line 572, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: requests
Python version: Python 3.4.2
requests version: 1.8.1

I'm on the team that's building the Cassandra service. Thanks for trying it out!
We've just updated the Cassandra CLI package to better define its pip dependencies. In your case it looks like it was trying to reuse an old version of the requests library. To bring your CLI's Cassandra module up to the latest version, try running dcos package uninstall --cli cassandra; dcos package install --cli cassandra. Note that the --cli flag is important; omitting it can uninstall the Cassandra service itself, while all we want is to reinstall the local CLI module.
Keep in mind that you should also be able to access the Cassandra service directly over HTTP. The CLI module is effectively a thin interface around the service's HTTP API. For example, curl -H "Authorization:token=$(dcos config show core.dcos_acs_token)" http://<your-dcos-host>/service/cassandra/v1/plan | jq '.'. See the curl examples in the Cassandra 1.7 docs for other endpoints.
Once you've gotten the CLI up and running, that should give more insight into the state of the service, but logs may give more thorough information, particularly if the service is failing to start. You can access the service logs directly by visiting the dashboard at http://<your-dcos-host>/:
Click Services on the left, then select marathon from the list. The Cassandra service manager is run as a Marathon task.
A panel will come up showing a list of all tasks being managed by Marathon. Click cassandra on this list to show its working directory, including the available log files.
When hovering over files, a magnifying glass will appear. Click a magnifying glass to display the corresponding file in-line.

Unfortunately we're still having the same problem, though we've managed to find a workaround. It seems there is more than one distinct issue with DC/OS on Azure; I'll provide further feedback as we go. With the Marketplace version of DC/OS 1.7.0, Cassandra doesn't deploy: it gets stuck in Marathon at phase 1/2, and inspection of the logs suggests a problem with accessing the default ports.
Pastebin to log file
On the other hand, that problem doesn't appear on ACS DC/OS: Cassandra deploys correctly and appears in the DC/OS Services tab as well as in Marathon. The DC/OS Cassandra CLI doesn't work on either. A cursory inspection suggests that when we installed the DC/OS CLI using the method above, some dependency issues arose, especially taking into account the $PYTHONPATH variable:
/opt/mesosphere/lib/python3.4/site-packages
We were able to solve the dependencies issue by taking two actions:
The first dependency issue was with the requests module, which we solved with the following commands after installing the CLI for the Cassandra subcommand:
cd ~/.dcos/subcommands/cassandra
source env/bin/activate
pip install -Iv requests
We used -Iv because the usual upgrade procedure fails on the external dependency in the $PYTHONPATH path; that resolved the requests dependency.
The second dependency the cassandra subcommand required was docopt; the same method solved that issue as well, and the subcommand now works as documented:
pip install -Iv docopt
This does seem a bit hackish; we're wondering if there's anything more appropriate to be done.
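A quick diagnostic we could have run inside the subcommand's virtualenv (after source env/bin/activate) to see which dependencies resolve. This is our own sketch, not part of the DC/OS tooling:

```python
import pkg_resources

def check_deps(deps):
    """Map each dependency name to its installed version, or None if missing."""
    found = {}
    for dep in deps:
        try:
            found[dep] = pkg_resources.get_distribution(dep).version
        except pkg_resources.DistributionNotFound:
            found[dep] = None
    return found

# The two modules the cassandra subcommand failed on for us:
print(check_deps(["requests", "docopt"]))
```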
Output of dcos cassandra connection after taking the above steps:
{
  "address": [
    "10.32.0.9:9042",
    "10.32.0.6:9042",
    "10.32.0.8:9042"
  ],
  "dns": [
    "node-0.cassandra.mesos:9042",
    "node-1.cassandra.mesos:9042",
    "node-2.cassandra.mesos:9042"
  ]
}
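For scripting against the service, the connection output above can be consumed directly; a minimal sketch, using the JSON copied from above:

```python
import json

# Output of `dcos cassandra connection`, copied from above.
raw = """
{
  "address": ["10.32.0.9:9042", "10.32.0.6:9042", "10.32.0.8:9042"],
  "dns": ["node-0.cassandra.mesos:9042",
          "node-1.cassandra.mesos:9042",
          "node-2.cassandra.mesos:9042"]
}
"""
info = json.loads(raw)
# Split "host:port" into (host, port) pairs a Cassandra driver can take.
contact_points = [tuple(addr.rsplit(":", 1)) for addr in info["address"]]
print(contact_points)  # [('10.32.0.9', '9042'), ('10.32.0.6', '9042'), ('10.32.0.8', '9042')]
```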
The same happens for other DC/OS subcommands like for example the Kafka one.

Related

Python flask saml throwing saml2.sigver.SigverError Error Message

Has anyone successfully implemented flask-saml using Windows as the dev environment, Python 3.6 and Flask 1.0.2?
I was given the link to the SAML METADATA XML file by our organisation and had it configured on my flask app.
app.config.update({
'SECRET_KEY': 'changethiskeylaterthisisoursecretkey',
'SAML_METADATA_URL': 'https://<url>/FederationMetadata.xml',
})
flask_saml.FlaskSAML(app)
According to the documentation this extension will setup the following routes:
/saml/logout/: Log out from the application. This is where users go if they click on a “Logout” button.
/saml/sso/: Log in through SAML.
/saml/acs/: After /saml/sso/ has sent you to your IdP, it sends you back to this path. Your IdP might also provide direct login without needing the /saml/sso/ route.
When I go to one of the routes http://localhost:5000/saml/sso/ I get the error below
saml2.sigver.SigverError: Cannot find ['xmlsec.exe', 'xmlsec1.exe']
I then went to this site https://github.com/mehcode/python-xmlsec/releases/tag/1.3.5 to get xmlsec and install it. However, I'm still getting the same issue.
Here is a screenshot of how I installed xmlsec; where does not seem to find xmlsec.exe.
The documentation asks you to have xmlsec1 pre-installed. What you installed is a Python binding to xmlsec1.
Get a Windows build of xmlsec1 from here, or build it from source, and make it available on the PATH.
xmlsec won't work properly on Windows; better to use a Linux environment. Run the command below before pip install xmlsec:
sudo apt-get install xmlsec1
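A quick way to confirm whether pysaml2 will find the binary it needs is to check the PATH programmatically; the executable names below are the ones listed in the error message above:

```python
import shutil

# pysaml2 shells out to xmlsec1; the SigverError above lists the names it
# probes for. Check which, if any, is on PATH.
for exe in ("xmlsec1", "xmlsec1.exe", "xmlsec.exe"):
    path = shutil.which(exe)
    print(exe, "->", path or "not found")
```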

How do I connect to an external Oracle database using the Python cx_Oracle package on Google App Engine Flex?

My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
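The two local setup steps above, written out as shell commands. This is a sketch: the Instant Client version and paths are taken from the steps above, and it assumes you have already unzipped the Instant Client archive into /opt/oracle:

```shell
# Local macOS setup for cx_Oracle, per the steps above.
sudo mkdir -p /opt/oracle
# (unzip the Instant Client 12.2 archive into /opt/oracle at this point)
mkdir -p ~/lib
ln -sf /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 ~/lib/
```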
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100MB!), then set the LD_LIBRARY_PATH environment variable in app.yaml to export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH. Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
If any of your dependencies is not available in the base GAE flex images provided by Google and cannot be installed via pip (because it's not a Python package, it's not available on PyPI, or for whatever other reason), then you can't use the requirements.txt file to get it installed in your GAE flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which
might include additional components like language interpreters or
application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
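A sketch of what such a Dockerfile might look like. The base image is the standard GAE flex Python runtime image, but the Instant Client directory name and the use of apt-get are assumptions (the flex images are Debian-based, so the yum command from Oracle's instructions becomes apt-get); adapt to your layout:

```dockerfile
# Hypothetical custom runtime for GAE flex with the Oracle Instant Client.
FROM gcr.io/google-appengine/python

# Debian equivalent of `sudo yum install libaio`.
RUN apt-get update && apt-get install -y libaio1 && rm -rf /var/lib/apt/lists/*

# Copy the unzipped Instant Client (~100MB) into the image.
COPY instantclient_12_2/ /opt/oracle/instantclient_12_2/
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH

COPY . /app/
RUN pip install -r /app/requirements.txt
```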
Answering your first question, I think the instructions on the Oracle website just show that you have to install said library for your application to work.
In the case of App Engine flex, the way to ensure that libraries are present in the deployment is the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I will assume that the Instant Client files are not libraries, but data your app needs at runtime. You could serve them from Google Cloud Storage, or any other storage alternative within Google Cloud.
I believe that, if this is all what you need for your App to work, pushing your own custom container should not be necessary.

Why am I getting the KeyError: 'CLOUD_STORAGE_BUCKET' from the default python-docs-samples/appengine/flexible/storage example

I am attempting to use Google Cloud Storage with Google App Engine and am currently looking at the “Using Cloud Storage” documentation page. It references the “Quickstart for Python in the App Engine Flexible Environment” project. I have pulled the “python-docs-samples/appengine/flexible/storage” from the Github and have followed the instructions regarding the virtualenv listed in the Quickstart.
When I run python main.py it results in and error:
File "main.py", line 27, in <module>
CLOUD_STORAGE_BUCKET = os.environ['CLOUD_STORAGE_BUCKET']
File "[PATH_TO_FILE]/python-docs-samples/appengine/flexible/storage/env/bin/../lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'CLOUD_STORAGE_BUCKET'
I did provide the name of my bucket in the app.yml file:
#[START env]
env_variables:
CLOUD_STORAGE_BUCKET: jcolumbetestbucket
#[END env]
Some areas of confusion I have:
Both the "Quickstart" and "Using Cloud Storage" projects seem to want to use Python 3, as listed in the app.yml files, but when I run the virtualenv commands, it installs Python 2.7. I did install virtualenv for Python 3 via sudo pip3 install virtualenv and ran both python3 main.py and python main.py, and I still get the error.
Also, this particular documentation says to use the python main.py command to run the local dev server, while others I have been looking at for the last few days say to use the dev_appserver.py . command.
Any insight or help would be helpful, as I have been trying to get this to work for days.
There are two different environments you can build your application in: Standard and Flexible. For more on that see Choosing an App Engine Environment.
dev_appserver.py is a sandbox used for testing apps meant for the Standard environment which loads environment variables from app.yaml without a problem.
The example you are using is meant for the Flexible environment. According to the documentation there are multiple ways of running these but none of them seem to be able to load environment variables locally.
You have two options: either stick with the Standard environment if it meets your needs or hardcode your environment variables for testing purposes.
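The "hardcode for testing" option can be done without changing the sample's logic much; a minimal sketch, where the fallback bucket name is a placeholder:

```python
import os

# GAE flex loads env_variables from app.yaml only on deployment; a local
# `python main.py` run won't see them. Fall back to a placeholder locally.
CLOUD_STORAGE_BUCKET = os.environ.get("CLOUD_STORAGE_BUCKET", "my-test-bucket")
print(CLOUD_STORAGE_BUCKET)
```

Alternatively, export CLOUD_STORAGE_BUCKET in your shell before running main.py.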

Is it a security issue to pin the version of the "certifi" package in requirements.txt?

I have a web service that uses the requests library to make https requests to a different, external service.
As part of my deployment process, whenever there's a change to the list of dependencies, I use pip freeze to regenerate the requirements.txt file, which is stored in my code repository and processed by my PaaS provider to set up the application environment.
Today, I noticed this line in my requirements.txt file:
certifi==14.05.14
That is, the certifi package is pinned down to a version that is no longer the latest.
Is this a security issue (does it mean that my trusted root certificates are not up-to-date)?
If so - what would be the best way to change my deployment process (which is, I think, fairly standard) to solve this issue?
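For reference, the regeneration step described above amounts to something like the following; the file name is the usual convention, not necessarily the exact setup here:

```shell
# Regenerate the pinned dependency list from the current environment,
# then inspect how certifi got pinned.
pip freeze > requirements.txt
grep certifi requirements.txt
```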

How to 'pip install packages' inside Azure WebJob to resolve package compatibility issues

I am deploying a WebJob inside Azure Web App that uses Google Maps API and Azure SQL Storage.
I am following the typical approach where I make a WebJob directory and copy my 'site-packages' folder inside the root folder of the WebJob. Then I also add my code folder inside 'site-packages' and make a run.py file inside the root that looks like this:
import sys, os
sys.path.append(os.path.join(os.getcwd(), "site-packages"))
import aero2.AzureRoutine as aero2
aero2.run()
Now the code runs correctly in Azure. But I am seeing warnings after a few commands which slow down my code.
I have tried copying 'pyopenSSL' and 'requests' module into my site-packages folder, but the error persists.
However, the code runs perfectly on my local machine.
How can I find a 'pyopenSSL' or 'requests' version that is compatible with the Python running on Azure?
Or
How can I modify my code so that it pip installs the relevant packages for the python running on Azure?
Or more importantly,
How can I resolve this error?
@Saad,
If your WebJob works fine on Azure Web App but you get an insecure-platform warning, I suggest you try disabling the warning via this configuration: https://urllib3.readthedocs.org/en/latest/security.html#disabling-warnings
Meanwhile, the requests lib has some differences from higher versions; I recommend you refer to this document:
http://fossies.org/diffs/requests/2.5.3_vs_2.6.0/requests/packages/urllib3/util/ssl_.py-diff.html
Azure Web App uses Python 2.7.8, which is lower than 2.7.9, so you can download the requests lib at version 2.5.3.
According to the doc referred to in the warning message, https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning:
Certain Python platforms (specifically, versions of Python earlier than 2.7.9) have restrictions in their ssl module that limit the configuration that urllib3 can apply. In particular, this can cause HTTPS requests that would succeed on more featureful platforms to fail, and can cause certain security features to be unavailable.
So the easiest way to fix this warning is to upgrade the Python version of the Azure Web App. Log in to the Azure management portal and change the Python version to 3.4 in the Application settings column:
I tested this in a WebJob task that uses the requests module to request an "https://" URL; since upgrading the Python version to 3.4, there are no more warnings.
I followed this article and, in effect, 'pip installed' the pymongo library for my script. Not sure if it works for you, but here are the steps:
Make sure you include the library name and version in requirements.txt.
Deploy the web app using Git; the directory should include at least requirements.txt (deploying installs whatever is in requirements.txt into the virtual environment, which is shared with the Web App in D:\home\site\wwwroot\env\Lib\site-packages).
Add this block of code to the Python script you want to use in the WebJob zip file:
import sys
# Use a raw string so the backslashes in the Windows path are not treated as escapes.
sitepackage = r"D:\home\site\wwwroot\env\Lib\site-packages"
sys.path.append(sitepackage)
