New Relic Python agent issue - python

I have 3 standalone Python applications, each in its own Python virtual env, running under mod_wsgi on the same server. I installed New Relic in the 1st application's virtual environment and it shows up fine on the New Relic GUI page.
When I followed the same steps for the 2nd and 3rd applications in their respective virtual envs, those 2 applications did not appear on the New Relic GUI applications page.
For all 3 applications, the log file has never been written to.
Please help me configure and integrate multiple Python apps hosted on a single server.
Below are the steps I followed.
OS: CentOS
Install the New Relic Python agent in each app's virtual env:
source <virtual path>
pip install newrelic
pip freeze
Generate config file:
cd /etc/newrelic/
newrelic-admin generate-config <Licence Key> newrelic.ini
Validate the conf file:
newrelic-admin validate-config newrelic.ini
Configure the variables in the config file:
logfile = <log file name>
loglevel = info
app_name = <name>
Integrate the applications' mod_wsgi file with New Relic:
Add the following to the wsgi.py file:
vi wsgi.py
import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic.ini')
The installed newrelic version is "newrelic==2.12.0.10".
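For reference, a rough sketch of the full wsgi.py I am aiming at is below (the project name is a placeholder and I am assuming a Django app here; the agent initialize must run before the app is imported):
import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic.ini')  # must run before the app code is imported

import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')  # assumed project name

from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()

# Wrap the WSGI callable so the agent sees every request
application = newrelic.agent.WSGIApplicationWrapper(application)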
Please help me.
Thank you,
subhani466#gmail.com

I know that you posted this a while ago, but the solution I found was to install newrelic (pip install newrelic) outside of the virtual envs.
Anyone facing this issue: just install New Relic outside the virtual envs.

I'm a little unclear on your setup. If you have all three Python apps in the same virtual environment and you're using newrelic.ini for all three, then all three applications will report to the same UI listing. You need to either name the config files differently (newrelic.ini, newrelic2.ini, newrelic3.ini) and reference them likewise, or split the applications into separate virtual environments.
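For example (paths assumed), each application's wsgi.py would then point at its own config file:
# app 1 wsgi.py
import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic.ini')

# app 2 wsgi.py
import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic2.ini')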
As for the log files not writing, it sounds like you don't have user rights to that directory. You can read more about this in the New Relic docs, here: https://docs.newrelic.com/docs/python/python-agent-logging
The path provided for 'log_file' should be writable to the user that
your application runs as. If using Apache/mod_wsgi that would usually
be the Apache user which has restricted access to the filesystem. You
might therefore need to create a special directory into which the log
file can be placed which is writable to the Apache user. Because the
current working directory of an application could be anything, it is
recommended that an absolute path and not a relative path be used.
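For example, a possible setup (the directory name and Apache user here are assumptions, not from the docs) would be:
mkdir -p /var/log/newrelic
chown apache:apache /var/log/newrelic
and then in newrelic.ini:
log_file = /var/log/newrelic/python-agent.log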
If you need more help, please open a ticket with us at http://support.newrelic.com

It looks like you are using one config file for all three applications.
They all have the same app_name.
Therefore, only one application is displayed in the APM UI.
Remove the app_name line from newrelic.ini and pass a unique NEW_RELIC_APP_NAME environment variable to each app.
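One way to do that (a sketch; the app name is a placeholder) is to set the variable in each app's wsgi.py before the agent initializes:
import os
os.environ['NEW_RELIC_APP_NAME'] = 'App One'  # unique name per application

import newrelic.agent
newrelic.agent.initialize('/etc/newrelic/newrelic.ini')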

Related

How to activate venv in production

I have a backend app based on Node.js. The code is written in JavaScript, except for the 'scripts' folder, which is in Python.
I have some external libraries installed (pandas, matplotlib...) to run those scripts, and I use a virtual env (venv) so they work correctly.
However, I always need to run 'source venv/bin/activate' before executing them (when I am on localhost).
The problem is in production.
Is there any way to keep the venv activated permanently in production, or is some other extra software needed? I host these files on a VPS at Hostinger, and in production I get errors in the scripts that use the installed libraries.
The missing 'activate' step might be the problem.
To make the comments an answer:
In production, you'll be running your app proper with a service manager such as systemd (that makes sure it stays running). You can direct the service manager to directly use the venv's Python, e.g. /home/app/venv/bin/python myapp.py; you don't need the activate script.
To have the virtualenv automatically activated for ad-hoc use on the production server, you can use a .bashrc file, e.g. /home/app/.bashrc that includes source ~/venv/bin/activate.
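For example, a minimal systemd unit along these lines (unit name, user, and paths are assumptions) runs the app with the venv's interpreter directly, with no activate step needed:
[Unit]
Description=My Python backend app

[Service]
User=app
WorkingDirectory=/home/app
ExecStart=/home/app/venv/bin/python /home/app/myapp.py
Restart=on-failure

[Install]
WantedBy=multi-user.target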

Running move and delete commands on Heroku CLI

I've recently deployed an application to Heroku. Due to the constraints of the file system, I'm looking to replace two CSS files within a Flask package when the application starts.
My main goal is to take a file from the app directory (the file is part of the git repo) and use it to replace a Python package file located in the site-packages directory.
I've tried to run the following from the Heroku CLI but nothing seems to happen.
heroku run mv ./bootstrap.css ./.heroku/python/lib/python3.6/site-packages/flask_bootstrap/static/css/bootstrap.css
I've also tried to remove the files from the site-packages directory using rm, but again nothing happens.
Could you please let me know if standard Unix commands work on Heroku?
Fork and edit flask_bootstrap, then point your requirements.txt at the fork, like this:
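For instance (the URL below is a placeholder for your own fork), the requirements.txt entry could be a pip VCS requirement:
git+https://github.com/<your-user>/flask-bootstrap.git@master#egg=Flask-Bootstrap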
Heroku uses an ephemeral filesystem: whatever changes you make to the filesystem only last until the dyno is restarted.
So my suggestion is to change the file in your local repo and push to Heroku again.
Source : https://help.heroku.com/K1PPS2WM/why-are-my-file-uploads-missing-deleted

Packaging Django code for deployment

I'm getting ready to move my Django project from my laptop to my server. What is the recommended way to do this? E.g., is there a Django command that will package everything up (and select the correct settings file for test vs prod servers) and create a zip or tar file that can be moved over to the server? Sort of like Ant for building Java projects.
I recommend using a virtual environment for your Django project.
Activating it on your server (source bin/activate) gives you the same setup as on your local machine.
In your project:
List all your dependencies in requirements.txt, and keep deployment-specific settings in my_settings.py, separate from Django's settings.py.
On your server:
Just pull/transfer your code via git or any other means and activate the virtual environment.
Run pip install -r requirements.txt and make any minor changes required in my_settings.
Take care of your migrations and DB setup. You may have to run migrations if you are moving to this server for the first time.
And that's it, you are up and running.
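A common way to wire up the my_settings.py part (a sketch, assuming my_settings.py sits next to settings.py inside the project package) is to import the server-specific overrides at the end of settings.py:
# at the bottom of settings.py
try:
    from .my_settings import *  # server-specific overrides (DEBUG, DATABASES, ALLOWED_HOSTS, ...)
except ImportError:
    pass  # no my_settings.py present, keep the defaults above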

Deploy Django project using wsgi and virtualenv on shared webhosting server without root access

I have a Django project which I would like to run on my shared webspace (1und1 Webspace) running on linux. I don't have root access and therefore can not edit apache's httpd.conf or install software system wide.
What I did so far:
installed sqlite locally since it is not available on the server
installed Python 3.5.1 in ~/.localpython
installed virtualenv for my local python
created a virtual environment in ~/ve_tc_lb
installed Django and Pillow in my virtual environment
cloned my django project from git server
After these steps, I'm able to run python manage.py runserver in my project directory and it seems to be running (I can access the login screen using lynx on my local machine).
I read many postings on how to configure fastCGI environments, but since I'm using Django 1.9.1, I'm depending on wsgi. I saw a lot about configuring Django for wsgi and virtualenv, but all the examples required access to httpd.conf.
The shared web server is apache.
I can create a new directory in my home with a sample hello.py and it is working when I enter the url, but it is (of course) using the python provided by the server and not my local installation.
When I change the first line indicating which python version to use to my virtual environment (#!/path/to/home/ve_tc_lb/bin/python), it seems to use the correct version in the virtual environment. Since I'm using different systems for developing and deployment, I'm not sure whether it is a good idea to e.g. add such a line in my djangoproject/wsgi.py.
Update 2016-06-02
A few more things I tried:
I learned that I don't have access to the apache error logs
read a lot about mod_wsgi and django in various sources which I just want to share here in case someone needs them in the future:
modwsgi - IntegrationWithDjango.wiki
debug mod_wsgi installation (only applicable if you are root)
mod_wsgi configuration guide
I followed the wsgi test script installation here - but the wsgi file is just displayed in my browser instead of being executed.
All in all it seems like my provider 1und1 did not install the wsgi extensions (even though support told me a week ago it would be installed).
Update 2016-06-12: I got a reply from support (after a week or so :-S) confirming that they don't have mod_wsgi, only wsgiref...
So I'm a bit stuck here - which steps should I do next?
I'll update the question regularly based on comments and remarks. Any help is appreciated.
Since your Apache is shared, I don't expect you can change httpd.conf, so you will have to work within your own setup instead. My suggestion is:
If you have multiple servers to which you will deploy your project (e.g. testing, staging, production), do the following steps for each deploy target.
In each server, create a true wsgi.py file which you will never put in versioning systems, pretty much like you would do with a local_settings.py file. This file will be named wsgi.py since most likely you cannot edit the Apache settings (since it is shared) and that name will be expected for your wsgi file.
The content for the file will be:
#!/path/to/your/virtualenv/python
from my_true_wsgi import *
This will be different for each deploy server, but the difference will most likely be only in the shebang line locating the proper Python interpreter.
You will have a file named my_true_wsgi.py to match the import in the code above. That file will be in the versioning system, unlike the wsgi.py file. The contents of such a file are the usual contents of wsgi.py in any regular Django project, just that you are not using that name directly.
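For reference, my_true_wsgi.py would just hold the standard Django wsgi.py boilerplate (the project name below is an assumption):
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoproject.settings')
application = get_wsgi_application()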
With this solution you can have several different wsgi files with no conflict on shebangs.
You'll have to use a webhost that supports Django. See https://code.djangoproject.com/wiki/DjangoFriendlyWebHosts. Personally, I've used WebFaction and was quite happy with it, their support was great and customer service very responsive.

How do I use multiple settings file in Django with multiple sites on one server?

I have an EC2 instance running Ubuntu 14.04 and I want to host two sites from it. For my first site I have two settings files, production_settings.py and settings.py (for local development). I import the local settings into the production settings and override them in the production settings file.
Since my production settings file is not the default settings.py name, I have to create an environment variable
DJANGO_SETTINGS_MODULE='site1.production_settings'
However because of this whenever I try to start my second site it says
No module named site1.production_settings
I am assuming that this is due to me setting the environment variable. Another problem is that I won't be able to use different settings files for different sites.
How do I use two different settings files for two different websites?
Edit: I missed the apache tag on this so I've updated my answer accordingly
Your problem is that when running your Django app site2, the Python interpreter does not know about any of the modules in site1 because site1 is not listed in your PYTHONPATH.
I would highly recommend doing a short amount of reading up on how PYTHONPATH works before you continue: http://www.stereoplex.com/blog/understanding-imports-and-pythonpath
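As a quick sanity check (a simple sketch), you can print sys.path from each site's wsgi file or from a Python shell to see exactly which directories the interpreter will search:
import sys
print('\n'.join(sys.path))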
Now, there are many ways to resolve this, but we'll cover 3:
Modify your apache virtualhost configuration for site2:
Be sure to read the docs here first for the differences between mod_wsgi v1 and v2, but as you're running Ubuntu 14.04, you should be using mod_wsgi v2 anyway.
https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonPath
<virtualhost *:80>
# your existing config directives
WSGIPythonPath /path/to/site1
</virtualhost>
This has the advantage of not modifying any of your application code, and it avoids potentially having invalid directories in your PYTHONPATH when running site2 on your development machine.
Append the path in your python code
Python sys.path - appending PYTHONPATH
In your site2.wsgi file, before you start your Django app, add the following code.
import sys
sys.path.append('/path/to/site1')
Simple, and works. This approach will also not cause you problems when moving between development (using manage.py runserver) and production.
Create a symlink
How to symlink a file in Linux?
Another simple choice is to simply symlink your production_settings.py into site2, and then set DJANGO_SETTINGS_MODULE='site2.production_settings'
ln -s /path/to/site1/site1/production_settings.py /path/to/site2/site2/production_settings.py
If you're developing on a Windows machine, this is probably the most problematic of the 3 approaches, so if you're pushing to the server using git or any other version control system, make sure to add your new symlink to your .gitignore file or your VCS's equivalent.
