I have an EC2 instance, managed by Elastic Beanstalk, on which my Flask app is deployed. I have implemented some REST APIs using Flask and Python.
I created another EC2 instance running Amazon Linux 2, on which I installed MongoDB Community Edition. I noticed that it binds to a local IP in the /etc/mongod.conf file:
# network interfaces
net:
port: 27017
bindIp: 127.0.0.1
As per my understanding, I need to set it to the private IP of the EC2 instance running the Flask app:
# network interfaces
net:
port: 27017
bindIp: <private ip of EC2 with flask app>
That way I can access the MongoDB installed on this instance from the Flask app:
# configuring mongo
application.config["MONGO_DBNAME"] = "my_db"
application.config["MONGO_URI"] = "mongodb://public_ip_of_mongodb:27017/my_app"
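As an aside, instead of hardcoding an IP in the URI, the connection string can be assembled from an environment variable. This is only a sketch: the helper name and the MONGODB_HOST variable are my own invention, not part of the original setup.

```python
import os

def build_mongo_uri(db_name, host=None, port=27017):
    # MONGODB_HOST is a hypothetical environment variable name;
    # falling back to localhost keeps local development working
    # without extra configuration.
    host = host or os.environ.get("MONGODB_HOST", "127.0.0.1")
    return f"mongodb://{host}:{port}/{db_name}"

# application.config["MONGO_URI"] = build_mongo_uri("my_app")
```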
For some reason, as soon as I edit the /etc/mongod.conf file, the mongod service starts failing:
mongod.service - MongoDB Database Server
Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-08-04 17:17:28 UTC; 1min 11s ago
Docs: https://docs.mongodb.org/manual
Process: 2019 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)
Process: 2015 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2012 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
Process: 2010 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
Main PID: 1651 (code=exited, status=0/SUCCESS)
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Starting MongoDB Database Server...
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: about to fork child process, waiting until server is ready for connections.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal mongod[2019]: forked process: 2023
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service: control process exited, code=exited status=48
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Failed to start MongoDB Database Server.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: Unit mongod.service entered failed state.
Aug 04 17:17:28 ip-172-31-5-131.ap-east-1.compute.internal systemd[1]: mongod.service failed.
Even if I revert bindIp to 127.0.0.1, it still fails.
Am I missing anything over here?
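For what it's worth, a process can only bind a listening socket to an address that the local machine actually owns; binding to another machine's IP fails immediately. A small Python sketch (the function name is mine) demonstrates the same class of error that would make a server refuse to start when told to bind a foreign address:

```python
import errno
import socket

def can_bind(ip, port=0):
    # Returns True if this machine owns `ip` and can listen on it,
    # False if the OS rejects it with EADDRNOTAVAIL (the address
    # belongs to some other host), None for any other error.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, port))
        return True
    except OSError as e:
        return False if e.errno == errno.EADDRNOTAVAIL else None
    finally:
        s.close()
```

So binding succeeds for 127.0.0.1 on any host, but fails for an address the host does not hold, regardless of what the remote machine is.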
Related
I'm trying to run a Flask app through Apache. When I run
systemctl list-units --type=service
I see that the unit for this specific app, "SITENAME.service" has failed.
When I run
sudo systemctl status SITENAME.service
I get an error saying:
Loaded: loaded (/etc/systemd/system/SITENAMEenv.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2023-01-01 21:26:25 UTC; 1min 1s ago
Process: 787 ExecStart=/home/ubuntu/SITENAME/SITENAMEenv/bin/uwsgi --ini SITENAMEenv.ini (code=exited, status=1/FAILURE)
Main PID: 787 (code=exited, status=1/FAILURE)
Jan 01 21:26:25 ip-172-31-88-10 systemd[1]: SITENAME.service: Main process exited, code=exited, status=1/FAILURE
Jan 01 21:26:25 ip-172-31-88-10 systemd[1]: SITENAME.service: Failed with result 'exit-code'.
Where can I get more information on this failure? I can't tell if this is a Python issue in the actual application code, or something else.
The site stopped working a little while ago, but there were no major changes to the code. The service that is not working has the description "uWSGI instance to serve SITENAME"
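More detail usually lives in the journal for that unit. A small sketch (function names are mine) of pulling those logs from Python, wrapping the standard `journalctl -u <unit>` invocation:

```python
import subprocess

def journalctl_cmd(unit, lines=50):
    # Builds the journalctl invocation for one unit; -n limits the
    # number of lines, --no-pager keeps the output script-friendly.
    return ["journalctl", "-u", unit, "-n", str(lines), "--no-pager"]

def unit_logs(unit, lines=50):
    # Runs the command and returns captured stdout.
    # Requires a systemd host; may need sudo for other users' units.
    return subprocess.run(journalctl_cmd(unit, lines),
                          capture_output=True, text=True).stdout
```

Running `unit_logs("SITENAME.service")` (or simply `journalctl -u SITENAME.service` in a shell) typically shows the uWSGI startup traceback that `systemctl status` truncates.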
I have a brand-new Fedora Server 36 minimal install, and all it will run is KVM.
I did the install with dnf group install "Headless Virtualization" and restarted the server.
systemctl status libvirtd showed it was not running; i.e.
[root@dell-fedora-kvm ~]# systemctl status libvirtd
○ libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
TriggeredBy: ○ libvirtd-tcp.socket
○ libvirtd-admin.socket
○ libvirtd-ro.socket
○ libvirtd-tls.socket
○ libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
so I started and enabled it with systemctl start libvirtd and systemctl enable libvirtd.
I restarted, but it still doesn't start automatically. When I start it manually with systemctl start libvirtd, this is what I get from status. I'm not sure this tells us where the issue could be.
[root@dell-fedora-kvm ~]# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2022-07-03 16:32:23 AEST; 52s ago
TriggeredBy: ○ libvirtd-tcp.socket
● libvirtd-admin.socket
● libvirtd-ro.socket
○ libvirtd-tls.socket
● libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 857 (libvirtd)
Tasks: 21 (limit: 32768)
Memory: 44.8M
CPU: 480ms
CGroup: /system.slice/libvirtd.service
├─ 857 /usr/sbin/libvirtd --timeout 120
├─ 957 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─ 958 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Jul 03 16:32:23 dell-fedora-kvm systemd[1]: Started libvirtd.service - Virtualization daemon.
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: started, version 2.86 cachesize 150
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth cryptoh>
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: DHCP, sockets bound exclusively to interface virbr0
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: reading /etc/resolv.conf
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: using nameserver 127.0.0.53#53
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: read /etc/hosts - 2 addresses
Jul 03 16:32:24 dell-fedora-kvm dnsmasq[957]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 03 16:32:24 dell-fedora-kvm dnsmasq-dhcp[957]: read /var/lib/libvirt/dnsmasq/default.hostsfile
However, the service shuts down after about 2 minutes...
[root@dell-fedora-kvm ~]# systemctl status libvirtd
○ libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Sun 2022-07-03 16:37:25 AEST; 6s ago
TriggeredBy: ○ libvirtd-tcp.socket
● libvirtd-admin.socket
● libvirtd-ro.socket
○ libvirtd-tls.socket
● libvirtd.socket
Docs: man:libvirtd(8)
https://libvirt.org
Process: 994 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=exited, status=0/SUCCESS)
Main PID: 994 (code=exited, status=0/SUCCESS)
Tasks: 2 (limit: 32768)
Memory: 31.7M
CPU: 330ms
CGroup: /system.slice/libvirtd.service
├─ 957 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─ 958 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
Jul 03 16:35:25 dell-fedora-kvm systemd[1]: Starting libvirtd.service - Virtualization daemon...
Jul 03 16:35:25 dell-fedora-kvm systemd[1]: Started libvirtd.service - Virtualization daemon.
Jul 03 16:35:25 dell-fedora-kvm dnsmasq[957]: read /etc/hosts - 2 addresses
Jul 03 16:35:25 dell-fedora-kvm dnsmasq[957]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 03 16:35:25 dell-fedora-kvm dnsmasq-dhcp[957]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Deactivated successfully.
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Unit process 957 (dnsmasq) remains running after unit stopped.
Jul 03 16:37:25 dell-fedora-kvm systemd[1]: libvirtd.service: Unit process 958 (dnsmasq) remains running after unit stopped.
I have not made any changes to libvirtd.service. Any idea what's going on?
In Fedora 35, the default libvirt installation was switched to use virtqemud:
https://fedoraproject.org/wiki/Changes/LibvirtModularDaemons
Furthermore, regardless of whether an install is configured to use libvirtd or virtqemud, there is no need for the services to be running at install time. They all make use of systemd socket activation, so they will start automatically whenever some application tries to use libvirt. They will stay running as long as an application is connected, and shut down 2 minutes after the last app disconnects (unless VMs are actively running).
I installed NGINX Unit:
systemctl status unit
● unit.service - NGINX Unit
Loaded: loaded (/lib/systemd/system/unit.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-09-02 15:33:30 JST; 4min 29s ago
Process: 2288597 ExecStart=/usr/sbin/unitd $DAEMON_ARGS (code=exited, status=0/SUCCESS)
Main PID: 2288600 (unitd)
Tasks: 3 (limit: 2282)
Memory: 5.6M
CGroup: /system.slice/unit.service
├─2288600 unit: main v1.25.0 [/usr/sbin/unitd]
├─2288613 unit: controller
└─2288614 unit: router
Sep 02 15:33:30 tk2-243-31156 systemd[1]: Starting NGINX Unit...
Sep 02 15:33:30 tk2-243-31156 unitd[2288597]: 2021/09/02 15:33:30 [info] 2288597#2288597 unit started
Sep 02 15:33:30 tk2-243-31156 systemd[1]: Started NGINX Unit.
Now, as a first test, I try
sudo curl --unix-socket /var/run/unit/control.sock http://localhost/
However, it returns curl: (7) Couldn't connect to server.
I guess Apache is running on this server, so port 80 is in use...
Is there any workaround?
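Note that `curl --unix-socket` never touches port 80; the `http://localhost/` part only supplies the request path and Host header, so an Apache port conflict should not cause error (7). That error usually means the socket path does not exist or permissions block it. To see what the curl command actually does on the wire, here is a minimal HTTP-over-Unix-socket client (function name is mine, and it assumes an HTTP/1.0-style close-after-response server):

```python
import socket

def http_get_over_unix(sock_path, path="/"):
    # Speaks minimal HTTP/1.0 over a Unix domain socket -- the same
    # transport `curl --unix-socket` uses -- and returns the raw
    # response text once the server closes the connection.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: localhost\r\n\r\n".encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closed the connection
                break
            chunks.append(data)
    return b"".join(chunks).decode()
```

If `connect()` raises FileNotFoundError the socket path is wrong; PermissionError points at needing root (hence the sudo in the original command).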
I've created a Python script that collects info for a 5-minute period and then sends the data to AWS CloudWatch. The body of the script runs forever in a while True: block, and when I run it on its own with python3 to_cloud.py, it functions successfully.
I made a service for it as follows:
[Unit]
Description=Service for sending script data to cloudwatch
After=multi-user.target
Conflicts=getty@tty1.service
[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/ubuntu/scripts/to_cloud.py
WorkingDirectory=/home/ubuntu/scripts
StandardInput=tty-force
[Install]
WantedBy=multi-user.target
When I start the service, it runs for the duration of the internal loop that collects the info, but then the following happens:
● to_cloud.service - Service for sending script data to cloudwatch
Loaded: loaded (/lib/systemd/system/to_cloud.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2021-06-23 16:53:44 UTC; 5s ago
Process: 191072 ExecStart=/usr/bin/python3 /home/ubuntu/scripts/to_cloud.py (code=exited, status=1/FAILURE)
Main PID: 191072 (code=exited, status=1/FAILURE)
Jun 23 16:52:43 ip-172-31-19-11 systemd[1]: Started Service for sending script data to cloudwatch.
Jun 23 16:53:44 ip-172-31-19-11 systemd[1]: to_cloud.service: Main process exited, code=exited, status=1/FAILURE
Jun 23 16:53:44 ip-172-31-19-11 systemd[1]: to_cloud.service: Failed with result 'exit-code'.
There are no logs in journalctl, so I'm wondering how to figure out what's going wrong. Thanks!
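An empty journal for a crashing Python service often just means the traceback was lost to stdout buffering before the process died (running under systemd, `PYTHONUNBUFFERED=1` or `python3 -u` is the usual cure). A sketch of a crash-logging wrapper around the while-True body; the function names are mine, not from the original script:

```python
import logging
import sys

def make_logger(name="to_cloud", stream=sys.stdout):
    # Log to stdout so systemd's journal captures every line.
    log = logging.getLogger(name)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    log.addHandler(handler)
    log.setLevel(logging.INFO)
    return log

def run_forever(step, log):
    # Wraps the while-True body so whatever exception kills the
    # service (the status=1/FAILURE above) is logged before exit.
    try:
        while True:
            step()
    except Exception:
        log.exception("to_cloud loop crashed")
        raise
```

With this in place, the exception that made systemd report status=1/FAILURE shows up in `journalctl -u to_cloud.service` instead of vanishing.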
I am trying to deploy a Django app using Gunicorn + Python 3 + Nginx. Before updating to Python 3.6 everything was working, but after the update I can't seem to create a Gunicorn sock file. I use the script below to run Gunicorn.
#!/bin/bash
NAME=yogavidya #Name of the application (*)
DJANGODIR=/home/ytsejam/public_html/abctasarim/ # Django project directory (*)
SOCKFILE=/home/ytsejam/public_html/abctasarim/run/gunicorn.sock # we will communicate using this unix socket (*)
USER=ytsejam # the user to run as (*)
GROUP=webdata # the group to run as (*)
NUM_WORKERS=1 # how many worker processes should Gunicorn spawn (*)
DJANGO_SETTINGS_MODULE=yogavidya.settings.base # which settings file should Django use (*)
DJANGO_WSGI_MODULE=yogavidya.wsgi # WSGI module name (*)
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source /usr/bin/virtualenvwrapper.sh
source /home/ytsejam/.virtualenvs/yv_dev/bin/activate
workon yv_dev
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec /home/ytsejam/public_html/abctasarim/gunicorn --name=$NAME --workers=$NUM_WORKERS --env=DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE --pythonpath /home/ytsejam/public_html/abctasarim --user ytsejam --bind=unix:/home/ytsejam/public_html/abctasarim/run/gunicorn.sock yogavidya.wsgi:application
When I check the status output of the service that runs this script, the behaviour changes from time to time. When it fails:
● yogavidya.service - Yogavidya gunicorn daemon
Loaded: loaded (/usr/lib/systemd/system/yogavidya.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-03-27 13:13:36 BST; 10min ago
Process: 14393 ExecStart=/home/ytsejam/public_html/abctasarim/gunicorn_start.sh (code=exited, status=1/FAILURE)
Main PID: 14393 (code=exited, status=1/FAILURE)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 117, in __init__
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: super(UnixSocket, self).__init__(addr, conf, log, fd=fd)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 32, in __init__
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: self.sock = self.set_options(sock, bound=bound)
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: File "/home/ytsejam/.virtualenvs/yv_dev/lib/python3.6/site-packages/gunicorn/sock.py", line 46, in set_options
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: if err[0] not in (errno.ENOPROTOOPT, errno.EINVAL):
Mar 27 13:13:36 ytsejam gunicorn_start.sh[14393]: TypeError: 'OSError' object is not subscriptable
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Main process exited, code=exited, status=1/FAILURE
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Unit entered failed state.
Mar 27 13:13:36 ytsejam systemd[1]: yogavidya.service: Failed with result 'exit-code'.
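The TypeError at the bottom of that traceback is a Python 2 idiom hitting Python 3: on Python 2, socket errors could be indexed (`err[0]` gave the errno), but Python 3's OSError is not subscriptable; the code lives in an attribute. That suggests the installed gunicorn predates full Python 3.6 support, so upgrading gunicorn is the likely fix. A minimal illustration of the difference:

```python
import errno

def errno_of(err):
    # Python 3 way: OSError carries the code in .errno (or err.args[0]).
    # The failing gunicorn line did `err[0]`, which only worked on
    # Python 2 exception objects and raises TypeError on Python 3.
    return err.errno

e = OSError(errno.EINVAL, "Invalid argument")
# errno_of(e) == errno.EINVAL, while e[0] raises TypeError on Python 3
```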
When I try to restart it, I get a success message and the output becomes:
yogavidya.service - Yogavidya gunicorn daemon
Loaded: loaded (/usr/lib/systemd/system/yogavidya.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2017-03-27 13:25:55 BST; 1s ago
Main PID: 14590 (gunicorn)
CGroup: /system.slice/yogavidya.service
└─14590 /home/ytsejam/.virtualenvs/yv_dev/bin/python3 /home/ytsejam/public_html/abctasarim/gunicorn --name=yogavidya --workers=1 --env=DJANGO_SETTINGS_MODULE=yogavidya.settings.base --pythonpath /home
Mar 27 13:25:55 ytsejam systemd[1]: Started Yogavidya gunicorn daemon.
Mar 27 13:25:55 ytsejam gunicorn_start.sh[14590]: Starting yogavidya as ytsejam
But I still can't see the sock file inside the "run" folder. How can I fix the Gunicorn script so it creates the sock file?
Thanks