The Odoo service is started with systemctl start odoo. I am using CentOS. When I want to deploy my changed *.py code, I do it like this:
1. systemctl stop odoo
Then I update my module and database using this:
2. ./odoo.py -c openerp-server.conf -u <my_module_name> -d <database_name>
3. Stop that process with Ctrl+C
4. systemctl start odoo
But this is a really long and awkward way to apply changes.
Is there a shorter way to do the same thing?
Odoo with Service
You can make changes like this:
Stop the server: systemctl stop odoo
Start the server: systemctl start odoo. At this point the changed .py files are picked up.
If you also need to update XML or some translations, you can press the Update button in the Odoo interface, on the module description form.
Note: there are modules that reload specific XML views. If you are interested, I can take a look and see if I can find one.
Odoo without Service
If you are developing on your local computer, you don't need to use systemctl. Just run Odoo directly with odoo.py and you can see the changes immediately:
./odoo.py -c openerp-server.conf -u <my_module_name> -d <database_name>
Autoreload Python Files
There is another option to reload python files when they have changed. Check this other answer:
Normally, if you change your Python code you need to restart the server in order to apply the changes.
If the --auto-reload parameter is enabled, you don't need to restart the server: it enables auto-reloading of Python and XML files without a restart. It requires pyinotify, a Python module for monitoring filesystem changes.
Just add --auto-reload to your configuration file. By default the value is "false". You don't need to pass any extra arguments; --auto-reload is enough. If everything is set up and working properly you will see
openerp.service.server: Watching addons folder /opt/odoo/v8.0/addons
openerp.service.server: AutoReload watcher running
in the server log. Don't forget to install the pyinotify package.
But in Odoo 10 you just add the --dev=reload parameter:
--dev=DEV_MODE Enable developer mode. Param: List of options
separated by comma. Options : all,
[pudb|wdb|ipdb|pdb], reload, qweb, werkzeug, xml
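For reference, pyinotify (the module the quoted answer relies on) can also be used directly. Here is a minimal, self-contained sketch, not part of Odoo itself, that prints a line whenever a Python file under an addons folder changes:
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        # called every time a watched file is written
        if event.pathname.endswith(".py"):
            print("changed:", event.pathname)

wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, Handler())
# rec=True also watches subdirectories
wm.add_watch("/opt/odoo/v8.0/addons", pyinotify.IN_MODIFY, rec=True)
notifier.loop()
This is roughly the mechanism --auto-reload builds on, hooked to a reload instead of a print.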
Failed to stop odoo.service: Access denied. See system logs and 'systemctl status odoo.service' for details.
Initially, I got this error. Then I tried with su and got this:
Failed to stop odoo.service: Unit odoo.service not loaded.
My concern is how to identify the right service file so that I can restart the Odoo service after installing a new module.
Related
This may be a sort of 101 question, but in setting this up for the first time there are no hints about such a fundamental and common task. Basically I have a headless ubuntu running as a docker image inside AWS, which gets built via github actions CI/CD. All is running well.
Inside ubuntu I have some python scripts, let's say a custom server, cron jobs, some software running etc. How can I know, remotely, if there were any errors logged by any of these? Let's keep it simple: How can I print an error message, from a python server inside ubuntu, that I can read from outside docker? Does AWS have any kind of web interface for viewing stdout/stderr logs? Or at least an ssh console? Any examples somewhere?
Furthermore, I've set up my docker with healthchecks, to confirm that my servers running inside ubuntu are online and serving. Those work: I can test them on localhost by running docker ps, which shows Status 'healthy'. How do I see this same thing when live in AWS?
Have I really missed something this big? It feels like this should be the first thing flashing on the main page of setting up a docker on AWS.
There are a few things to unpack here, which you only learn after digging through a lot of stuff you don't need, just so you can know how to get started.
By default, Docker logs the output of the startup process you've described in your Dockerfile setup, e.g. when you do ENTRYPOINT bash -C /home/ubuntu/my_dockerfile_sh_scripts/myStartupScripts.sh. If any subprocesses spawned by that process also log to stdout/stderr, the messages should bubble up to the main process, and therefore show up in the docker logs. If they don't bubble up, look up how subprocess stdout/stderr works in Linux.
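In Python terms, the easiest way to take advantage of this is to log to stdout/stderr instead of to a file. A minimal sketch (the logger name is just an example, not from the question):
import logging
import sys

# anything the main process writes to stdout/stderr ends up in `docker logs`
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

log = logging.getLogger("my_custom_server")  # example name
log.info("server started")
log.error("something went wrong")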
Ok, we know that, but where the heck is AWS's stats and logs page? Well, in Amazon Cloudwatch™ of course. Didn't you already know about that term? Why, it says so right there when you create a docker, or on your ECS console next to your docker Clusters, or next to your running docker image Service. OH WAIT! No, no it does not! There is no utterance of "Cloudwatch" anywhere. Well, there is this one page that has "Cloudwatch" on it, which you can get to if you know the url, but hey, look at that, you don't actually see any sort of logs coming from your code in docker anywhere on there, so... yeah. So where do you see your actual logs and output? There is a Logs tab in your Service's page (the page of the currently running docker image): https://eu-central-1.console.aws.amazon.com/ecs/home?region=eu-central-1#/clusters/your-cluster-name/services/your-cluster-docker-image-service/logs. This generically named, undescribed tab doesn't show some AWS-side status log for the service; it actually shows you the docker logs I mentioned in point 1. Ok. How do I view this as a raw file or access this remotely via script? Well, I don't know. I guess you'll find out about that basic, common task after reading a couple of manuals about setting up the AWS CLI (another thing you didn't know existed).
Like I said in point 1, docker cannot log generic operating system log messages, or show you log files generated by your server or by other software or jobs that are running which weren't described and started by your dockerfile/config. So how do we get AWS to see that? Well, it's a bit of a pain in the ass: you either have to replace your docker image's default OS's (e.g. ubuntu) logging driver with sudo yum install -y awslogs and set that up, or you can create symbolic links between specific log files and the stdout/stderr stream (the docker docs mention this). Also check Mark B's answer. But probably the easiest thing is to write your own little scripts with short messages that print the status of things out to the main process. Usually that's all you need unless you're an enterprise.
Is there any ssh access or an AWS online command-line interface into the running docker, like you get with Docker Desktop on localhost? So you could maybe cd and ls around, browse or search for files and see if everything's fine? No. Make your own. Or better yet, avoid needing that in the first place, even though it's inconvenient for R&D.
Healthchecks. Where the heck do I see my docker healthchecks? The equivalent of the localhost method of just running the docker ps command. Well, by default there aren't any healthchecks shown anywhere on AWS. Why would you need healthchecks anyway? So what if your dockerfile has HEALTHCHECKs defined?.. 🙂 You have to set that up in Fargate™ (...whatever Fargate even means, because the name's not explained anywhere ("UX")). You have to create what is called a new Task Definition Revision. Go to your Clusters in Amazon ECS. Go to your cluster. Then click on your Service's entry in the Task Definition column of the services table at the bottom. Click on Create New Revision (new task definition revision). On the new page, click on your container in the Container Definitions table. On the next page, scroll down to HEALTHCHECK, bingo! Now what is this? What commands do I paste in here? It's not automatically taking the HEALTHCHECK that I defined in my dockerfile, so does that mean I must write something else here? What environment are the healthchecks even run in? Is it my docker? Is it linux? Here's the answer: you paste into this box what you already wrote in your dockerfile's HEALTHCHECK. Just use http://127.0.0.1 (localhost) as you would in your local Docker Desktop testing environment. Now click Update. Click Create. OK, we're still not done. Go back to Amazon ECS / Clusters / your cluster. Click on your service name in the services table. Click Update. Select the latest Revision. Check "force new deployment". Then keep clicking Next until finally you click Update Service. You can also define what triggers your image to be shut down on healthcheck failure, for example if it ran out of RAM. Now #Amazon, I hope you take this answer and staple it to your shitty ass ECS experience.
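If curl isn't available inside your image, a tiny Python script can stand in as the healthcheck command. A sketch assuming your server answers on http://127.0.0.1 (the filename healthcheck.py is made up):
# healthcheck.py - exit 0 if the local server answers, 1 otherwise
import sys
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1/", timeout=5) as resp:
        sys.exit(0 if resp.status < 500 else 1)
except Exception:
    sys.exit(1)
The dockerfile HEALTHCHECK (or the ECS healthcheck box described above) would then run something like python3 /healthcheck.py.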
I swear the relentlessly, exclusively bottom-up UX of platforms like AWS and Azure is what is keeping the tutorial blogger industry alive... How would I know what AWS CloudWatch is, or that it even exists? There are no hints about these things anywhere while you set up. You'd think the first thing that flashes on your screen after you complete a docker setup would be "hey, 99.9% of people right now need to set up logging. You should use CloudWatch. And here's how you connect healthchecks to CloudWatch". But no, of course not..! 🙃
Instead, AWS's "engineer" approach here seems to be: here's a grid of holes in the wall, and here's a mess of wires next to it in a bucket. Now in order to do the common frequently done tasks you want to do, you must first read the manual for each hole, and the manual for each wire in the bucket, then find all of the holes and wires you need, and plug them in the right order (and for the right order you need to find a blog post because that always involves some level of not following the docs and definitely also magic).
I guess it's called "job security" for if you're an enterprise server engineer :)
I faced the same issue. I found the AWS wiki, but the /dev/stdout symbolic link didn't work for me; the /proc/1/fd/1 symbolic link did.
Here is the solution:
Step 1. Add those commands to your Dockerfile.
# forward logs to docker log collector
RUN ln -sf /proc/1/fd/1 /var/log/console.log \
&& ln -sf /proc/1/fd/2 /var/log/error.log
Step 2. Refer to Step 2 of Mark B's answer below.
Step 1. Update your docker image by deleting all the log files you care about and replacing them with symbolic links to stdout or stderr. For example, to capture logs in an nginx container I might do the following in the Dockerfile:
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log
Step 2. Configure the awslogs driver in the ECS Task Definition, like so:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "my-log-group",
"awslogs-region": "my-aws-region",
"awslogs-stream-prefix": "my-log-prefix"
}
}
And as long as you gave the ECS Execution Role permission to write to AWS Logs, log data will start appearing in CloudWatch Logs.
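And if you want to pull those logs from a script rather than the console (the "access this remotely via script" question above), a rough boto3 sketch, assuming your AWS credentials are configured and the names match the task definition above:
import boto3

# region and log group are the same placeholders used in the task definition above
logs = boto3.client("logs", region_name="my-aws-region")
response = logs.filter_log_events(logGroupName="my-log-group", limit=50)
for event in response["events"]:
    print(event["message"])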
I'm trying to start and stop services from a python script that is running using Flask and Apache.
To get the status from memcached, for example, I'm using
os.popen('service memcached status').read() and it works like a charm.
The problem is that when I try to start/stop by doing something like
os.popen('service memcached stop').read() it just does nothing (I checked in the shell and the service is still running).
To summarize, I can get the status but can't start/stop, and I don't know why this happens.
Does anyone have any suggestion?
Thanks,
I looked at the Apache logs in /var/log/apache2/error.log and the problem was that I needed more privileges to execute start/stop. But when I tried to use
os.popen('sudo service memcached stop').read()
I got an error saying that a password was required for sudo.
To solve this problem I typed in the shell:
visudo
which opened the /etc/sudoers file. And there I added the line
www-data ALL=(ALL) NOPASSWD:ALL
I understand this to mean that I am giving the user www-data permission to execute sudo commands without a password.
To quit, press Ctrl+X and then y to save.
Note: www-data is the user that runs Apache.
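As a side note, here is a sketch of the same call made with subprocess instead of os.popen, so a permission failure is actually surfaced rather than silently swallowed (my own variation, not part of the fix above):
import subprocess

result = subprocess.run(
    ["sudo", "service", "memcached", "stop"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    # without the sudoers entry above, the "password required" error shows up here
    print("stop failed:", result.stderr)
Also, a sudoers rule limited to the one command you need (for example NOPASSWD: /usr/sbin/service memcached *) should be safer than NOPASSWD:ALL.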
Up to now I followed this advice to reload the code:
https://code.google.com/archive/p/modwsgi/wikis/ReloadingSourceCode.wiki
This has the drawback, that the code changes get detected only every N second. I could use N=0.1, but this results in useless disk IO.
AFAIK the inotify callback of the linux kernel is available via python.
Is there a faster way to detect code changes and restart the wsgi handler?
We use daemon mode on linux.
Why code reload for mod_wsgi at all
There is interest in why I want this at all. Here is my setup:
Most people use "manage.py runserver" for development and some other WSGI deployment for production.
In my context we have automated the creation of new systems and prod and development systems are mostly identical.
One operating system (linux) can host N systems (virtual environments).
Developers can use runserver or mod_wsgi. Using runserver has the benefit that it's easy for debugging; mod_wsgi has the benefit that you don't need to start the server first.
mod_wsgi has the benefit, that you know the URL: https://dev-server/system-name/myurl/
With runserver you don't know the port. Use case: You want to link from an internal wiki to a dev-system ....
A dirty hack we used in the past to get code reload for mod_wsgi: maximum-requests=1, but this is slow.
Preliminaries.
Developers can use runserver or mod_wsgi. Using runserver has the
benefit that it's easy for debugging, mod_wsgi has the benefit that
you don't need to start the server first.
But you do: the server needs to be set up first, and that takes a lot of effort. And the server needs to be started here as well, though you can configure it to start automatically at boot.
If you are running on port 80 or 443, which is usually the case, the server can only be started by root. If it needs to be restarted you will have to ask for the superuser's help again. So ./manage.py runserver scores heavily here.
mod_wsgi has the benefit, that you know the URL:
https://dev-server/system-name/myurl/
Which is no different from the dev server. By default it starts on port 8000, so you can access it as http://dev-server:8000/system-name/myurl/. If you want to use SSL with the development server you can use a package such as django-sslserver, or you can put nginx in front of the django development server.
With runserver you don't know the port. Use case: You want to link from an internal wiki to a dev-system ....
With runserver, the port is well defined as mentioned above. And you can make it listen on a different port, for example with:
./manage.py runserver 0.0.0.0:9090
Note that if you put the development server behind Apache (as a reverse proxy) or NGINX, the restarting problems etc. that I mentioned above do not apply.
So in short, for development work, whatever you do with mod_wsgi can be done with the django development server (aka ./manage.py runserver).
Inotify
Here we are getting to the main topic at last. Assuming you have installed inotify-tools, you could type this into your shell; you don't need to write a script.
while inotifywait -r -e modify .; do sudo kill -2 yourpid ; done
This will result in the code being reloaded when ...
... using daemon mode with a single process you can send a SIGINT
signal to the daemon process using the ‘kill’ command, or have the
application send the signal to itself when a specific URL is
triggered.
ref: http://modwsgi.readthedocs.io/en/develop/user-guides/frequently-asked-questions.html#application-reloading
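That second option, having the application send the signal to itself, could look roughly like this as a Django view (a sketch; the view and URL are hypothetical, and this assumes daemon mode with a single daemon process):
import os
import signal

from django.http import HttpResponse

def reload_code(request):
    # the daemon process shuts down on SIGINT and mod_wsgi starts a fresh one
    os.kill(os.getpid(), signal.SIGINT)
    return HttpResponse("reloading")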
alternatively
while inotifywait -r -e modify .; do touch wsgi.py ; done
when
... using daemon mode, with any number of processes, and the process
reload mechanism of mod_wsgi 2.0 has been enabled, then all you need
to do is touch the WSGI script file, thereby updating its modification
time, and the daemon processes will automatically shutdown and restart
the next time they receive a request.
In both situations we are using the -r flag to tell inotifywait to monitor subdirectories. That means each time you save a .css or .js file, apache will reload. But without the -r flag, changes to Python code in subfolders will go undetected. To have the best of both worlds, exclude css, js, images etc. with the --exclude option.
What about when your IDE saves an auto-backup file, or vim saves a .swp file? That too will cause a code reload, so you would have to exclude those file types as well.
So in short, it's a lot of hard work to reproduce what the django development server does free of charge.
You can use inotify-hookable to run any command you want in response to an inotify event (here's my source link: http://terokarvinen.com/2016/when-files-change-take-action-inotify-hookable).
Once a change is detected, you can just reload the code served by Apache.
For your specific problem, it should be something like:
inotify-hookable --watch-directories sources/ --recursive --on-modify-command './code_reload.sh'
In the previous link, the command to execute was just a simple touch flask/init.wsgi
So, the whole command (with ignored files added) was:
inotify-hookable --watch-directories flask/ --recursive --ignore-paths='flask/init.wsgi' --on-modify-command 'touch flask/init.wsgi'
As stated here: Flask + mod_wsgi automatic reload on source code change, if you have enabled WSGIScriptReloading, you can just touch that file. It will cause the entire code to reload (not just the config file). But, if you prefer, you can set any other script to reload the code.
After googling a bit, it seems to be a pretty standard solution for that problem and I think that you can use it for your application.
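If you would rather do the watching from Python than from a shell tool, a rough equivalent using the third-party watchdog package (my own substitution, not what the answer above uses) could look like this:
import os
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

WSGI_FILE = "flask/init.wsgi"   # the same path used in the answer above

class TouchWsgi(FileSystemEventHandler):
    def on_modified(self, event):
        # touch the WSGI script so mod_wsgi reloads; ignore the script itself
        if not event.is_directory and not event.src_path.endswith(".wsgi"):
            os.utime(WSGI_FILE, None)

observer = Observer()
observer.schedule(TouchWsgi(), "flask/", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()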
I made a Flask app following Flask's tutorial. After running python flaskApp.py, how can I stop the app? I pressed Ctrl+C in the terminal but I can still access the app through the browser. How do I stop it? Thanks.
I even rebooted the VPS. After the VPS restarted, the app is still running!
Ctrl+C is the right way to quit the app; I don't think you can visit the URL after Ctrl+C. In my environment it works fine.
What is the terminal output after CTRL+C? Maybe you can add some details.
You can try fetching the URL with curl to test whether the browser cache or anything else browser-related is causing this.
Since you are using Apache, in order to stop your app you have to disable it by deleting the .conf file from the /etc/apache2/sites-enabled/ folder and then restarting the Apache server. This will destroy your currently running session.
$ cd /etc/apache2/sites-enabled/
$ sudo rm conf_filename.conf
$ sudo service apache2 restart
Try it and your site will be down. To bring it back up, copy your file to /etc/apache2/sites-available/ and run the following commands to enable it again.
$ sudo a2ensite conf_filename.conf
$ sudo service apache2 restart
Now your site will be live again.
Have you tried pkill python?
WARNING: do not do this without consulting your system admin if you are sharing the server with others.
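A more targeted variant of pkill python, using the third-party psutil package (my suggestion, not from the answers above), terminates only the process whose command line mentions flaskApp.py:
import psutil

for proc in psutil.process_iter(["pid", "cmdline"]):
    cmdline = proc.info["cmdline"] or []
    if any("flaskApp.py" in part for part in cmdline):
        proc.terminate()   # sends SIGTERM to that one process only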
So I am running Selenium on an Ubuntu Server VM and have a minor issue. When I start up my VM and run a Selenium test script I get this error: selenium.common.exceptions.WebDriverException: Message: 'The browser seems to have exited before we could connect'. Now if I execute export DISPLAY=:99 in the terminal before I run any of my Selenium test scripts, everything works fine. All tests run great headlessly!
My question is: do any of you know how to execute this command at startup, so I don't have to run it in the terminal before I run my Selenium test scripts? I've tried adding it to the /etc/rc.local file, but this doesn't seem to work.
I've also tried executing it at the beginning of my Selenium test scripts, by adding this (I'm using Python):
os.system("export DISPLAY=:99")
Any suggestions as to how to accomplish this?
Thanks in advance
This isn't going to work:
os.system("export DISPLAY=:99")
Because system() starts a new shell and the shell will close when finished, this influences the environment of exactly one process that is very short lived. (Child processes cannot influence the environments of their parents. Parents can only influence the environment of their children, if they make the change before executing the child process.)
You can pick a few different mechanisms for setting the DISPLAY:
Set it in the scripts that start your testing mechanism
This is especially nice if the system might do other tasks, as this will influence as little as possible. In Python, that would look like this (a fuller sketch appears at the end of this answer):
os.environ["DISPLAY"]=":99"
In bash(1), that would look like:
export DISPLAY=:99
Set it in the login scripts of the user account that runs the tests.
This is nice if the user account that runs the tests will never need a DISPLAY variable. (Though if a user logs in via ssh -X testinguser@machine ... this will clobber the usual ssh(1) X session forwarding.)
Add this to your user's ~/.bashrc or ~/.profile or ~/.bash_profile. (See bash(1) for the differences between the files.)
export DISPLAY=:99
Set it at login for all users. This is nice if multiple user accounts on the system will be running the testing scripts and you just want it to work for all of them. You don't care about users ever having a DISPLAY for X forwarding.
Edit /etc/environment to add the new variable. The pam_env(8) PAM module will set the environment variables for all user accounts that authenticate under whichever services are configured to use pam_env(8) in the /etc/pam.d/ configuration directory. (This sounds more complicated than it is -- some services want authenticated users to have environment variables set, some services don't.)
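For the first option, a fuller sketch of what a test script could look like, assuming Xvfb is already running on display :99 and Firefox is the browser (the URL is just an example):
import os

from selenium import webdriver

# must be set before the browser is launched
os.environ["DISPLAY"] = ":99"

driver = webdriver.Firefox()
driver.get("http://example.com")
print(driver.title)
driver.quit()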