I am building a Django project, and in OpenShift I have an app with the Python 2.7 and MySQL 5.5 cartridges. I also want to use bower to manage the client-side packages, but bower depends on npm and Node. On OpenShift I've got npm installed, but I don't have Node, so I can't install bower.
How can I install Node.js on OpenShift?
Note: I don't have sudo permission on OpenShift.
Thanks.
The host environment provides access to npm and nodejs-0.6, even if you've selected the Python web cartridge.
If you want to minimize your repo contents, and use OpenShift to run your builds remotely, I'd try using action_hooks to provide your own custom build steps.
You could also consider running your builds locally, and committing and shipping your build results, possibly via an alternate "release" or "build" branch.
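For example, a build action hook could install bower with the bundled npm and then run it against your repo. This is only a sketch, assuming the standard .openshift/action_hooks/build location, that npm is on the PATH during the build, and that OPENSHIFT_REPO_DIR points at your deployed repo:

#!/bin/bash
# .openshift/action_hooks/build -- remember to make it executable (chmod +x)
set -e
cd "$OPENSHIFT_REPO_DIR"
# Install bower locally in the repo (no sudo needed), then run it
npm install bower
./node_modules/.bin/bower install

After a git push, the hook runs on the gear during the build phase, so the client-side packages are fetched remotely instead of being committed to the repo.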
Related
In my Azure Function app, I have some Ubuntu packages, like the Azure CLI and kubectl, that I need to install on the host whenever it starts a new container. I have already tried start-up commands and also going into Bash. The former doesn't work and the latter tells me permission is denied and the resource is locked. Is there any way to install these packages on function start-up in Azure Functions?
Trying to install the packages via Bash won't work at all. When you write functions in Python and deploy them to Linux on Azure, the various packages are installed according to requirements.txt and merged into a single deployment package; when the function runs on Azure, it runs against that package. So installing packages after deployment is the wrong approach: specify the packages to be installed in requirements.txt before deployment, and then deploy to Azure.
I am trying to deploy a Flask app to an Azure Web App (Linux, python3.7 runtime) using FTP.
I copied the "application.py" over and a "requirements.txt", but I can see in the logs that nothing is being installed.
The Web App is using an 'antenv' virtual environment but it won't install anything. How do I add libraries to this 'antenv' virtual environment?
Yes, I see that you have resolved the issue. You must use Git to deploy Python apps to App Service on Linux so that the dependencies in requirements.txt (in the root folder) are installed.
To install Django and any other dependencies, you must provide a requirements.txt file and deploy to App Service using Git.
The antenv folder is where App Service creates a virtual environment with your dependencies. If you expand this node, you can verify that the packages you named in requirements.txt are installed in antenv/lib/python3.7/site-packages. Refer to this document for more details.
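For reference, a minimal requirements.txt for the Flask app described above might look like this. It sits in the repo root next to application.py; the version pin is a placeholder, not taken from the question:

# requirements.txt (repo root, next to application.py)
Flask==1.1.1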
Additionally, although the container can run Django and Flask apps automatically, provided the app matches an expected structure, you can also provide a custom startup command file through which you have full control over the Gunicorn command line. A custom startup command is typically required for Flask apps but not for Django apps.
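For a Flask app like the one in the question, the custom startup command is typically a Gunicorn invocation along these lines; it assumes the Flask instance is named app inside application.py, which may differ in your project:

gunicorn --bind=0.0.0.0 --timeout 600 application:app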
Turns out I had to run these commands and do a git push while my local venv was activated. At that point I saw Azure start downloading all the libraries in my requirements.txt.
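The exact commands aren't reproduced here, but a local-Git deployment to App Service usually looks roughly like the following sketch; the remote name, app name, and deployment username are placeholders you get from the portal or the az CLI:

git remote add azure https://<deployment-username>@<app-name>.scm.azurewebsites.net/<app-name>.git
git push azure master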
I've developed and tested a dash app. It works as expected. Next step is to deploy the app to AWS Elastic Beanstalk using a preconfigured Docker container.
I am currently trying to set up a local docker environment for testing as described here
Running the command (via PowerShell):
docker build -t dash-app -f Dockerfile .
successfully downloads the preconfigured image, then proceeds to install python modules as specified in requirements.txt, until it gets to the cryptography module, where it throws a runtime error saying it requires setuptools version 18.5 or newer.
My Dockerfile has this line in it:
FROM amazon/aws-eb-python:3.4.2-onbuild-3.5.1
I've tried adding a line to the dockerfile to force upgrade pip and setuptools within the container as suggested here and here, but nothing seems to work.
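Such a line would look roughly like this, a sketch placed right after the FROM line; the version pins are placeholders:

# force-upgrade pip and setuptools inside the container
RUN pip install --upgrade pip "setuptools>=18.5"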
I've built a web app using Django, deployed on OpenShift. I'm trying to add the third-party reusable app markdown-deux. I've followed the install instructions (used pip) and it works fine on the localhost development server.
I've added 'markdown_deux' to my settings.py and tried it with and without a requirements.txt. However, I still get a 500 error, and rhc tail shows the error "ImportError: no module named markdown_deux".
I've tried restarting my app and resyncing the db, but I'm still getting the same errors. I've RTFM but to no avail.
OpenShift has mechanisms to automatically check and add dependencies after each git push, depending on your application type, so you don't need to install dependencies manually.
For Python applications, modify the project's setup.py.
Python application owners should modify setup.py in the root of the git repository with the list of dependencies that will be installed using easy_install. The setup.py should look something like this:
from setuptools import setup

setup(name='YourAppName',
      version='1.0',
      description='OpenShift App',
      author='Your Name',
      author_email='example@example.com',
      url='http://www.python.org/sigs/distutils-sig/',
      install_requires=['Django>=1.3', 'CloudMade'],
      )
Read all the details at the OpenShift Help Center.
You've used pip to install it locally, but you actually need to install it on your server as well. Usually you would do that by adding it to the requirements.txt file and ensuring that your deployment process includes running pip install -r requirements.txt on the server.
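A concrete way to do that without guessing the exact PyPI package name is to capture it from the environment where the app already works; the grep pattern below is just an assumption about the name:

# locally, where the app works: capture the exact package name and version
pip freeze | grep -i markdown >> requirements.txt
# on the server (or as part of the deployment step)
pip install -r requirements.txt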
I have a Django website on a testing server and I am confused about how the deployment procedure should go.
Locally I have these folders:
code
virtualenv
static
static/app/bower_components
node_modules
Currently on git I only have the code folder in there.
My initial thought was to do this on the production server:
git clone repo
pip install
npm install
bower install
collectstatic
But I had this problem that sometimes some components in pip, npm, or bower fail to install and then the production deployment fails.
I was thinking of putting everything (static, bower, npm, etc.) inside git so that I can fetch it all in production.
Is that the right way to do it? I want to know the right way to tackle this problem.
But I had this problem that sometimes some components in pip or npm or bower fail to install and then production deployment fails.
There is no solution for this other than to find out why things are failing in production (or a way around would be to not install anything in production, just copy stuff over).
I would caution against the second option because Python virtual environments are not designed to be portable. If you have components such as PIL/Pillow or database drivers, these need system libraries to be installed and compiled against at build time.
Here is what I would recommend, which is in line with the deployment section in the documentation:
Create an updated requirements file (pip freeze > requirements.txt)
Run collectstatic on your testing environment.
Move the static directory to your frontend/proxy machine, and map it to STATIC_URL. Confirm this works by browsing the static URL (for example: http://example.com/static/images/logo.png)
Clone/copy your codebase to the production server.
Create a blank virtual environment.
Install dependencies with pip install -r requirements.txt
Make sure you run through the deployment checklist, which includes security tips and settings you need to enable for production.
After this point, you can bring up your Django server using your favorite method.
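A rough consolidation of those steps as commands, purely as a sketch; the repo URL, directory names, and virtualenv tool are placeholders:

# on the testing machine
pip freeze > requirements.txt
python manage.py collectstatic --noinput

# on the production server
git clone <your-repo-url> myproject
cd myproject
virtualenv env          # or: python3 -m venv env
source env/bin/activate
pip install -r requirements.txt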
There are many, many guides on deploying Django, and many are customized for particular environments (for example, AWS automation, Heroku deployment tips, Digital Ocean, etc.). You can browse those for ideas (I usually pick out any automation tips), but be careful about adopting one strategy without making sure it works with your particular environment/requirements.
In addition, this might be helpful for some guidelines on deployment.