I have a service command, app start-demo, that requires me to type sudo service app start-demo on the command line.
I used sudo('service app start-demo') and sudo('sudo service app start-demo'), but I still get
Warning: sudo() encountered an error (return code 1) while executing 'sudo service app start-demo'
I have no problem executing the same command in a terminal.
I am not sure whether a SADeprecationWarning counts as a failure to Fabric?
Thanks.
user@box:/var/lib/app$ fab kickstart
You are installing prereqs..........
### Install Prereqs for Populate ###
No hosts found. Please specify (single) host string for connection: localhost
[localhost] Login password:
### I am starting demo ###
[localhost] sudo: sudo service app start-demo
[localhost] out: Starting demo
Fatal error: sudo() encountered an error (return code 1) while executing 'sudo service app start-demo'
Aborting.
Disconnecting from localhost... done.
The code:
from fabric.api import settings, sudo

def pserve():
    print('### I am starting demo ###')
    # with settings(warn_only=True):
    sudo('sudo service app start-demo')
    # sudo('service app start-demo')
Either sudo command will fail.
/etc/sudoers
# /etc/sudoers
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL) ALL
# Allow members of group sudo to execute any command after they have
# provided their password
# (Note that later entries override this, so you might need to move
# it further down)
%sudo ALL=(ALL) ALL
#
#includedir /etc/sudoers.d
# Members of the admin group may gain root privileges
%admin ALL=(ALL) NOPASSWD:ALL
This is probably related to this mention in the FAQ, but also: if the command doesn't return 0 (the Unix convention for success), Fabric will fail fast unless you tell it to warn only.
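Init scripts are also a known trouble spot because of the pseudo-terminal Fabric allocates by default. A minimal sketch of the workaround, assuming Fabric 1.x (warn_only, pty, and the result attributes are standard Fabric features):

from fabric.api import settings, sudo

def pserve():
    print('### I am starting demo ###')
    # Tolerate a nonzero exit status instead of aborting the whole run.
    with settings(warn_only=True):
        result = sudo('service app start-demo', pty=False)
    if result.failed:
        print('start-demo exited with code %s' % result.return_code)

With warn_only=True a nonzero exit status becomes a warning, so you can inspect result.return_code and decide how to react; pty=False avoids the pseudo-terminal behavior the FAQ describes for init scripts.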
Related
I'm using Supervisor to daemonize a Python / Liquidsoap application. When I start the application from the command line, things are working fine.
When I run the same application using supervisorctl the Liquidsoap implementation fails when trying to access the audio device:
[lineout:3] Using ALSA 1.1.8.
[clock.wallclock_alsa:2] Error when starting output lineout: Failure("Error while setting open_pcm: No such file or directory")!
The USB Audio Interface is accessed via ALSA. The Supervisor Configuration has the correct user set and the service is started with this very user:
[program:aura-engine]
user = engineuser
directory = /opt/aura/engine
command = /opt/aura/engine/run.sh engine
priority = 666
autostart = true
autorestart = true
stopsignal = TERM
redirect_stderr = true
stdout_logfile = /var/log/aura/engine-core-stdout.log
stderr_logfile = /var/log/aura/engine-core-error.log
Any ideas if there are any additional hardware permission issues involved when using Supervisord?
It turned out that starting the application as root (root user in the Supervisor config, but also starting supervisord as root, plus starting the service with sudo supervisorctl start ...) successfully grants access to the audio hardware. But running the app as root is not an option, and Supervisor also issues a warning about it.
Then I returned the configuration to the desired engineuser and reloaded the configuration with sudo:
sudo supervisorctl reload
Now, suddenly I'm able to start the app without root/sudo and have full access to the audio hardware:
supervisorctl start aura-engine
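If access still fails after a reload, it's worth verifying that the service user can actually open the ALSA device nodes. A hypothetical check in Python (the /dev/snd path and the "audio" group are common Linux defaults, not taken from the question; run it as engineuser):

import grp
import os

# ALSA device nodes are usually group-owned by "audio" (assumption).
user = 'engineuser'
groups = [g.gr_name for g in grp.getgrall() if user in g.gr_mem]
print('supplementary groups for %s: %s' % (user, groups))

# os.access checks against the current process's identity.
for dev in sorted(os.listdir('/dev/snd')):
    path = os.path.join('/dev/snd', dev)
    print(path, 'accessible:', os.access(path, os.R_OK | os.W_OK))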
TL;DR: Not able to see syslog logs in InfluxDB.
Environment:
OS: macOS Mojave
Telegraf version: 1.8
InfluxDB version: 1.6.4
So I wanted to view logs in Chronograf and, from the set of input plugins offered by Telegraf, figured I should use the syslog plugin.
I have followed the instructions here, but have added the steps here too for an easy read.
I installed rsyslog via homebrew as follows:
$ brew install rsyslog
Added the following in /usr/local/etc/rsyslog.conf:
$WorkDirectory /tmp/rsyslog # temporary directory for storing data
$ActionQueueType LinkedList # use asynchronous processing
$ActionQueueFileName srvrfwd # set file name, also enables disk mode
$ActionResumeRetryCount -1 # infinite retries on insert failure
$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
$ModLoad imudp #loads the udp module
#listen for messages on udp localhost:514
$UDPServerAddress localhost
$UDPServerRun 514
*.* @@(o)127.0.0.1:6514;RSYSLOG_SyslogProtocol23Format
restarted rsyslog:
$ sudo brew services restart rsyslog
I configured telegraf as follows:
# Accepts syslog messages per RFC5425
[[inputs.syslog]]
  ## Specify an ip or hostname with port - eg., tcp://localhost:6514, tcp://10.0.0.1:6514
  ## Protocol, address and port to host the syslog receiver.
  ## If no host is specified, then localhost is used.
  ## If no port is specified, 6514 is used (RFC5425#section-4.1).
  server = "tcp://localhost:6514"
and restarted telegraf as follows:
$ brew services restart telegraf
But my expectation was to see a syslog measurement inside of the telegraf database.
I wrote the following Python script to log to syslog, hoping that it would appear in the database:
import logging
import logging.handlers

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# /var/run/syslog is the macOS syslog socket (on Linux it is /dev/log).
handler = logging.handlers.SysLogHandler(address='/var/run/syslog')
my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')
but to no avail.
What could be wrong here? Is there a log file I could check?
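One way to narrow this down is to bypass rsyslog entirely and push a single RFC5424 message, octet-framed per RFC5425, straight at the Telegraf listener; if it shows up in InfluxDB, the problem is on the rsyslog side. A minimal sketch (the port matches the config above; the priority, timestamp, and text are placeholders):

import socket

# <14> = facility "user", severity "info"; fields follow RFC5424.
msg = '<14>1 2018-11-01T12:00:00Z myhost myapp - - - hello telegraf'
frame = '%d %s' % (len(msg), msg)  # octet-counting framing per RFC5425

with socket.create_connection(('localhost', 6514)) as sock:
    sock.sendall(frame.encode())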
EDIT 1:
So I troubleshot rsyslog using the rsyslogd -N1 command and found some issues:
The configuration file must be in /etc/rsyslog.conf.
The working directory from the configuration file, i.e. /tmp/rsyslog, wasn't found, so I created it.
There were some errors in the rsyslog.conf file (they can be seen in the edits of this question).
After doing all of the above I restarted rsyslog, and also InfluxDB and Telegraf, and checked again for errors by running the rsyslogd -N1 command; the following is the output:
rsyslogd: version 8.37.0, config validation run (level 1), master config /etc/rsyslog.conf
rsyslogd: End of config validation run. Bye.
Still the same issue persists.
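At this point it also helps to query InfluxDB directly instead of going through Chronograf, to see whether any syslog points arrived at all. A hypothetical check using the influxdb Python client (database and measurement names are the Telegraf defaults):

from influxdb import InfluxDBClient  # pip install influxdb

client = InfluxDBClient(host='localhost', port=8086, database='telegraf')
result = client.query('SELECT * FROM syslog ORDER BY time DESC LIMIT 5')
print(list(result.get_points()))  # an empty list means no points arrived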
EDIT 2:
related post: syslog-plugin-from-remote-server
I am writing a command line app in Python using the click module that will SSH to a Cisco network device and send configuration commands over the connection using the netmiko module. The problem I'm running into is that SSH-ing to the network device requires a hostname/IP, username, and password. I am trying to implement a way for a user of my script to log in to a device once and keep the SSH connection open, allowing subcommands to be run without logging in each time. For example:
$ myapp ssh
hostname/IP: 10.10.110.10
username: user
password: ********
Connected to device 10.10.101.10
$ myapp command1
log output
$ myapp --option command2
log output
$ myapp disconnect
closing connection to 10.10.101.10
How would I go about storing/handling credentials to allow this functionality in my CLI? I have seen recommendations of caching or OAuth while researching this issue, but I'm still not sure how to implement this, or what the recommended and safe way to do it is.
Perhaps you are attempting something like this:
$ myapp ssh -u user -p password
(myapp) command1
(myapp) command2
(myapp) disconnect
$
Python has a standard library module, cmd, that may help:
https://docs.python.org/3.5/library/cmd.html
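A minimal sketch of that approach, assuming netmiko's ConnectHandler for the SSH session (the device_type, show command, and prompts are illustrative placeholders). Credentials are requested once, held only in memory for the life of the process, and never written to disk:

import cmd
import getpass

from netmiko import ConnectHandler

class MyApp(cmd.Cmd):
    prompt = '(myapp) '

    def __init__(self, host, username, password):
        super().__init__()
        # One SSH session kept open for the whole interactive loop.
        self.conn = ConnectHandler(device_type='cisco_ios', host=host,
                                   username=username, password=password)

    def do_command1(self, arg):
        """Example subcommand that reuses the open connection."""
        print(self.conn.send_command('show ip interface brief'))

    def do_disconnect(self, arg):
        """Close the SSH session and leave the shell."""
        self.conn.disconnect()
        return True  # returning True stops cmd.Cmd's loop

if __name__ == '__main__':
    host = input('hostname/IP: ')
    user = input('username: ')
    MyApp(host, user, getpass.getpass('password: ')).cmdloop()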
I have this Jenkins build configuration for my Django application in the "Execute Windows batch command" field:
// Code is downloaded using Git plugin
virtualenv data/.venv
call data/.venv/Scripts/activate.bat
pip install -r requirements/local.txt
cd src/
python .\manage.py test
cd ..
fabric dev deploy // Build job gets stuck here
All steps work OK except the last one. Jenkins gets stuck on Fabric's first attempt to connect to the remote server. In the "Console output" the spinner keeps spinning and I need to kill the build manually.
When I run the Fabric task manually from the CLI, it works. I read about some problems with Jenkins + known_hosts, so I tried env.reject_unknown_hosts = True in the fabfile to see if there is an "Add to authorized keys" question.
Fabfile is pretty standard, nothing special:
from fabric.api import env, local, run, task

@task
def dev():
    env.user = "..."
    env.hosts = "..."
    env.key_filename = "..."
    env.reject_unknown_hosts = True

@task
def deploy():
    local("python src/manage.py check")  # <---- OK, output is in Jenkins
    run('git reset --hard')              # <---- Jenkins will freeze
    run('git pull --no-edit origin master')
    # etc ....
    print("Done.")
These require a password; the process is probably stuck asking for the user's password.
Add --no-pty to the command to make sure it's not blocking, and to get the error reported.
It is then solved based on your specific remote/SSH/TTY setup.
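To make hidden prompts visible under Jenkins, one option is Fabric 1.x's abort_on_prompts setting, which turns any interactive prompt into an immediate, logged failure instead of a silent hang. A minimal sketch (these are real Fabric 1.x settings; the values are placeholders):

from fabric.api import env

env.abort_on_prompts = True  # any prompt aborts the run instead of hanging
env.key_filename = "..."     # key-based auth avoids password prompts

Once the build fails fast, the abort message names the prompt Fabric hit, which tells you whether it was a login password, a key passphrase, or a host-key question.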
I'm trying to prefill env.password using --initial-password-prompt, but the remote is throwing back some strangeness. Let's say that I'm trying to cat a root-owned file as testuser, with 600 permissions on the file. I'm calling sudo('cat /home/testuser/test.txt') and getting this back:
[testuser#testserver] sudo: cat /home/testuser/test.txt
[testuser#testserver] out: cat: /home/testuser/test.txt: Permission denied
[testuser#testserver] out:
Fatal error: sudo() received nonzero return code 1 while executing!
Requested: cat /home/testuser/test.txt
Executed: sudo -S -p 'sudo password:' -u "testuser" /bin/bash -l -c "cat /home/testuser/test.txt"
Is that piping the prompt right back into the input? I tried using sudo() with pty=False to see if it was an issue with the pseudoterminal, but to no avail.
Here's the weird part: calling run('sudo cat /home/testuser/test.txt') and invoking fab without --initial-password-prompt passes back a password prompt from remote, and on entering the password, everything works fine.
Naturally, running ssh -t testuser#testserver 'sudo cat /home/user/test.txt' prompts for a password and returns the contents of the file correctly. Do I have an issue with my server's shell config, or is the issue with how I'm using sudo()?
Down the line, I'm likely to set up a deploy user with no-password sudo and restricted commands. That'll probably moot the issue, but I'd like to figure this one out if possible. I'm running an Ubuntu 14.10 VPS, in case that's relevant.
Oh, my mistake. I had foolishly set env.sudo_user to my deploy user testuser, thinking that it was specifying the invoking user on remote. In fact, it was specifying the target user, and I was attempting to sudo into myself. Whoops.
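For anyone hitting the same thing, a minimal sketch of the distinction (usernames are placeholders; env.user and env.sudo_user are real Fabric 1.x settings):

from fabric.api import env, sudo

env.user = 'testuser'         # the SSH login (invoking) user
# env.sudo_user = 'testuser'  # wrong: sudo back into yourself, hence the
                              # Permission denied on a root-owned 600 file

def read_protected():
    sudo('cat /home/testuser/test.txt')  # with sudo_user unset, runs as root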