Configuring nginx to work with Python 3.2 - python

I'm having some trouble configuring nginx to work with Python 3.2, and I'm struggling to find anything resembling a decent tutorial on the subject. I did, however, find a decent tutorial on getting nginx to play nicely with Python 2.7. My thought process was that since uWSGI works with plugins, it should be a relatively simple exercise to follow the Python 2.7 tutorial and just swap out the Python plugin.
Here is the tutorial I followed to get a basic Hello World site working: https://library.linode.com/web-servers/nginx/python-uwsgi/ubuntu-12.04-precise-pangolin
/etc/uwsgi/apps_available/my_site_url.xml looks like:
<uwsgi>
    <plugin>python</plugin>
    <socket>/run/uwsgi/app/my_site_urlmy_site_url.socket</socket>
    <pythonpath>/srv/www/my_site_url/application/</pythonpath>
    <app mountpoint="/">
        <script>wsgi_configuration_module</script>
    </app>
    <master/>
    <processes>4</processes>
    <harakiri>60</harakiri>
    <reload-mercy>8</reload-mercy>
    <cpu-affinity>1</cpu-affinity>
    <stats>/tmp/stats.socket</stats>
    <max-requests>2000</max-requests>
    <limit-as>512</limit-as>
    <reload-on-as>256</reload-on-as>
    <reload-on-rss>192</reload-on-rss>
    <no-orphans/>
    <vacuum/>
</uwsgi>
Once everything was working, I installed uwsgi-plugin-python3 via apt-get. ls -l /usr/lib/uwsgi/plugins/ now outputs:
-rw-r--r-- 1 root root 142936 Jul 17 2012 python27_plugin.so
-rw-r--r-- 1 root root 147192 Jul 17 2012 python32_plugin.so
lrwxrwxrwx 1 root root 38 May 17 11:44 python3_plugin.so -> /etc/alternatives/uwsgi-plugin-python3
lrwxrwxrwx 1 root root 37 May 18 12:14 python_plugin.so -> /etc/alternatives/uwsgi-plugin-python
Changing python to python3 or python32 in my_site_url.xml has the same effect, i.e.:
The hello world page takes ages to load (it was effectively instantaneous before) and then comes up blank
My site's access log records the access
My site's error log records no new errors
/var/log/uwsgi/app/my_site_url.log records the following:
[pid: 4503|app: 0|req: 1/2] 192.168.1.5 () {42 vars in 630 bytes} [Sun May 19 10:49:12 2013] GET / => generated 0 bytes in 0 msecs (HTTP/1.1 200) 2 headers in 65 bytes (1 switches on core 0)
So my question is:
How can I correctly configure this app to work on Python 3.2?

The listed tutorial has the following application code:
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
This is incompatible with Python 3.2 because under Python 3 the WSGI response body must consist of bytes objects, not str. Replacing the application function with the following fixes things (the body is returned as a list containing a bytes object, per the WSGI spec):
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b"Hello World"]

Related

antlr4 python binding for java8 missing start rule

I'm trying to use antlr on one of the grammars here (specifically, java 8):
$ antlr4 -Dlanguage=Python3 grammars-v4/java/java8/Java8Lexer.g4
$ antlr4 -Dlanguage=Python3 grammars-v4/java/java8/Java8Parser.g4
This step appears to go smoothly, and when I inspect the directory:
$ ls -l grammars-v4/java/java8/*.py
-rw-r--r-- 1 root root 56583 Nov 30 14:32 grammars-v4/java/java8/Java8Lexer.py
-rw-r--r-- 1 root root 798326 Nov 30 14:32 grammars-v4/java/java8/Java8Parser.py
-rw-r--r-- 1 root root 79928 Nov 30 14:32 grammars-v4/java/java8/Java8ParserListener.py
everything is there. Still, when I try to use the hello world example:
from antlr4 import *
from Java8Lexer import Java8Lexer
from Java8Parser import Java8Parser
input_stream = FileStream("/main.java")
lexer = Java8Lexer(input_stream)
stream = CommonTokenStream(lexer)
parser = Java8Parser(stream)
tree = parser.startRule()
I get an error:
AttributeError: 'Java8Parser' object has no attribute 'startRule'
The parser methods (startRule in your case) correspond to the parser rules defined in the .g4 grammar.
Look in the Java grammar: there is a parser rule with EOF in it called compilationUnit. Use that instead:
tree = parser.compilationUnit()
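If you're ever unsure which entry rules a generated parser offers, the ANTLR Python runtime generates a ruleNames class attribute on every parser. A minimal sketch, using a stand-in class since the generated Java8Parser isn't reproduced here:

```python
# Hypothetical stand-in for a generated ANTLR parser class; real generated
# parsers expose their grammar's rule names the same way, via `ruleNames`.
class FakeParser:
    ruleNames = ["compilationUnit", "packageDeclaration", "importDeclaration"]

def entry_rule_candidates(parser_cls):
    """List the parser rules a grammar defines, to spot the entry rule."""
    return list(parser_cls.ruleNames)

print(entry_rule_candidates(FakeParser))
```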

SysLogHandler messages grouped on one line on remote server

I am trying to use the Python logging module to log messages to a remote rsyslog server. The messages are received, but it's concatenating the messages together on one line for each message. Here is an example of my code:
to_syslog_priority: dict = {
    Level.EMERGENCY: 'emerg',
    Level.ALERT: 'alert',
    Level.CRITICAL: 'crit',
    Level.ERROR: 'err',
    Level.NOTICE: 'notice',
    Level.WARNING: 'warning',
    Level.INFO: 'info',
    Level.DEBUG: 'debug',
    Level.PERF: 'info',
    Level.AUDIT: 'info'
}
@staticmethod
def make_logger(*, name: str, log_level: Level, rsyslog_address: Tuple[str, int], syslog_facility: int) -> Logger:
    """Initialize the logger with the given attributes"""
    logger = logging.getLogger(name)
    num_handlers = len(logger.handlers)
    for i in range(0, num_handlers):
        logger.removeHandler(logger.handlers[0])
    logger.setLevel(log_level.value)
    syslog_priority = Log.to_syslog_priority[log_level]
    with Timeout(seconds=RSYSLOG_TIMEOUT, timeout_message="Cannot reach {}".format(rsyslog_address)):
        sys_log_handler = handlers.SysLogHandler(rsyslog_address, syslog_facility, socket.SOCK_STREAM)
    # There is a bug in the python implementation that prevents custom log levels
    # See /usr/lib/python3.6/logging/handlers.SysLogHandler.priority_map on line 789. It can only map
    # to 5 log levels instead of the 8 we've implemented.
    sys_log_handler.mapPriority = lambda *args: syslog_priority
    logger.addHandler(sys_log_handler)
    stdout_handler = logging.StreamHandler(sys.stdout)
    logger.addHandler(stdout_handler)
    return logger
if __name__ == '__main__':
    app_logger = Log.make_logger(name='APP', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                 syslog_facility=SysLogHandler.LOG_USER)
    audit_logger = Log.make_logger(name='PERF', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                   syslog_facility=SysLogHandler.LOG_LOCAL0)
    perf_logger = Log.make_logger(name='AUDIT', log_level=Log.Level.INFO, rsyslog_address=('localhost', 514),
                                  syslog_facility=SysLogHandler.LOG_LOCAL1)
    log = Log(log_level=Log.Level.WARNING, component='testing', worker='tester', version='1.0', rsyslog_srv='localhost',
              rsyslog_port=30514)
    app_logger.warning("Testing warning logging")
    perf_logger.info("Testing performance logging1")
    audit_logger.info("Testing aduit logging1")
    audit_logger.info("Testing audit logging2")
    app_logger.critical("Testing critical logging")
    perf_logger.info("Testing performance logging2")
    audit_logger.info("Testing audit logging3")
    app_logger.error("Testing error logging")
On the server side, I added the following line to /etc/rsyslog.d/50-default.conf to disable /var/log/syslog logging for the USER, LOCAL0 and LOCAL1 facilities (which I use for app, perf, and audit logging):
*.*;user,local0,local1,auth,authpriv.none -/var/log/syslog
And I updated /etc/rsyslog.conf:
# /etc/rsyslog.conf Configuration file for rsyslog.
#
# For more information see
# /usr/share/doc/rsyslog-doc/html/rsyslog_conf.html
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf
#################
#### MODULES ####
#################
module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# Filter duplicated messages
$RepeatedMsgReduction on
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
user.* -/log/app.log
local0.* -/log/audit.log
local1.* -/log/perf.log
So after doing all that, when I run the Python code (listed above), these are the messages I'm seeing on the remote server:
for log in /log/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/log/app.log >>>
Oct 23 13:00:23 de4bba6ac1dd rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 13:01:34 Testing warning logging#000<14>Testing critical logging#000<14>Testing error logging
<<< /log/app.log
/log/audit.log >>>
Oct 23 13:01:34 Testing aduit logging1#000<134>Testing audit logging2#000<134>Testing audit logging3
<<< /log/audit.log
/log/perf.log >>>
Oct 23 13:01:34 Testing performance logging1#000<142>Testing performance logging2
<<< /log/perf.log
As you can see, the messages are being filtered to the proper log file, but they're being concatenated onto one line. I'm guessing it does this because they arrive at the same time, but I'd like the messages to be split onto separate lines.
In addition, I've tried adding a formatter to the SysLogHandler so that it inserts a line break to the message like this:
sys_log_handler.setFormatter(logging.Formatter('%(message)s\n'))
However, this really screws it up:
for log in /log/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/log/app.log >>>
Oct 23 13:00:23 de4bba6ac1dd rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 13:01:34 Testing warning logging#000<14>Testing critical logging#000<14>Testing error logging
Oct 23 13:12:00 Testing warning logging
Oct 23 13:12:00 172.17.0.1 #000<134>Testing audit logging2
Oct 23 13:12:00 172.17.0.1 #000<14>Testing critical logging
Oct 23 13:12:00 172.17.0.1 #000<142>Testing performance logging2
Oct 23 13:12:00 172.17.0.1 #000<134>Testing audit logging3
Oct 23 13:12:00 172.17.0.1 #000<14>Testing error logging
Oct 23 13:12:00 172.17.0.1
<<< /log/app.log
/log/audit.log >>>
Oct 23 13:01:34 Testing aduit logging1#000<134>Testing audit logging2#000<134>Testing audit logging3
Oct 23 13:12:00 Testing aduit logging1
<<< /log/audit.log
/log/perf.log >>>
Oct 23 13:01:34 Testing performance logging1#000<142>Testing performance logging2
Oct 23 13:12:00 Testing performance logging1
<<< /log/perf.log
As you can see, the first message gets put into the right file for the audit and performance logs, but then all the other messages get put into the application log file. However, there is definitely a line break now.
My question is: I want to filter the messages based on facility, but with each message on a separate line. How can I do this using the Python logging library? I guess I could take a look at the syslog library?
So I came across this Python bug:
https://bugs.python.org/issue28404
So I took a look at the source code (a nice thing about Python), specifically the SysLogHandler.emit() method:
def emit(self, record):
    """
    Emit a record.

    The record is formatted, and then sent to the syslog server. If
    exception information is present, it is NOT sent to the server.
    """
    try:
        msg = self.format(record)
        if self.ident:
            msg = self.ident + msg
        if self.append_nul:
            # Next line is always added by default
            msg += '\000'
        # We need to convert record level to lowercase, maybe this will
        # change in the future.
        prio = '<%d>' % self.encodePriority(self.facility,
                                            self.mapPriority(record.levelname))
        prio = prio.encode('utf-8')
        # Message is a string. Convert to bytes as required by RFC 5424
        msg = msg.encode('utf-8')
        msg = prio + msg
        if self.unixsocket:
            try:
                self.socket.send(msg)
            except OSError:
                self.socket.close()
                self._connect_unixsocket(self.address)
                self.socket.send(msg)
        elif self.socktype == socket.SOCK_DGRAM:
            self.socket.sendto(msg, self.address)
        else:
            self.socket.sendall(msg)
    except Exception:
        self.handleError(record)
As you can see it adds a '\000' to the end of the message by default, so if I set this to False and then set a Formatter that adds a line break, then things work the way I expect. Like this:
sys_log_handler.mapPriority = lambda *args: syslog_priority
# This will add a line break to the message before it is 'emitted' which ensures that the messages are
# split up over multiple lines, see https://bugs.python.org/issue28404
sys_log_handler.setFormatter(logging.Formatter('%(message)s\n'))
# In order for the above to work, then we need to ensure that the null terminator is not included
sys_log_handler.append_nul = False
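The framing mismatch this fixes can be reproduced with plain strings, independent of rsyslog (a sketch of the behaviour described above, not the actual network path):

```python
# SysLogHandler terminates each record with NUL ('\000') by default, while
# rsyslog's imtcp splits the incoming TCP stream on newlines, so several
# records arrive looking like one long line.
records = ["<14>Testing warning logging", "<14>Testing critical logging"]

nul_stream = "".join(r + "\000" for r in records)   # default framing
nul_lines = [line for line in nul_stream.split("\n") if line]
print(len(nul_lines))   # 1 -> both records end up on a single line

lf_stream = "".join(r + "\n" for r in records)      # newline framing
lf_lines = [line for line in lf_stream.split("\n") if line]
print(len(lf_lines))    # 2 -> one line per record
```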
Thanks for your help, @Sraw. I tried to use UDP, but never got the message. After applying these changes, this is what I see in my log files:
$ for log in /tmp/logging_test/*.log; do echo "${log} >>>"; cat ${log}; echo "<<< ${log}"; echo; done
/tmp/logging_test/app.log >>>
Oct 23 21:06:40 083c9501574d rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
Oct 23 21:06:45 Testing warning logging
Oct 23 21:06:45 Testing critical logging
Oct 23 21:06:45 Testing error logging
<<< /tmp/logging_test/app.log
/tmp/logging_test/audit.log >>>
Oct 23 21:06:45 Testing audit logging1
Oct 23 21:06:45 Testing audit logging2
Oct 23 21:06:45 Testing audit logging3
<<< /tmp/logging_test/audit.log
/tmp/logging_test/perf.log >>>
Oct 23 21:06:45 Testing performance logging1
Oct 23 21:06:45 Testing performance logging2
<<< /tmp/logging_test/perf.log
I believe a TCP stream makes this more complicated. When you are using a TCP stream, rsyslog won't help you split the messages; you are on your own.
Why not use the UDP protocol? In that case, every single message is treated as a single log entry, so you don't need to add \n manually. And manually adding \n makes it impossible to log multi-line messages correctly.
So my suggestion is to change to the UDP protocol and:
# Disable escaping to accept multiple line log
$EscapeControlCharactersOnReceive off
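On the Python side, switching to UDP is just a matter of the socktype (a minimal sketch; the address and facility are simply the ones from the question):

```python
import logging
import logging.handlers
import socket

# Sketch: with UDP each emitted record is sent as its own datagram, so
# rsyslog never has to split a byte stream and no trailing '\n' is needed.
handler = logging.handlers.SysLogHandler(
    address=("localhost", 514),
    facility=logging.handlers.SysLogHandler.LOG_USER,
    socktype=socket.SOCK_DGRAM,
)
logger = logging.getLogger("udp-example")
logger.addHandler(handler)
logger.warning("each record travels as one datagram")
```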
I came across the same issue.
capture multiline events with rsyslog and storing them to file
The cause is related to the question above:
the NUL character is not handled as a delimiter in imtcp.
My solution was setting AddtlFrameDelimiter="0", something like below.
module(load="imtcp" AddtlFrameDelimiter="0")
input(type="imtcp" port="514")
reference:
AddtlFrameDelimiter
Syslog Message Format
related rsyslog code
EDITED 2022/10/04
The previous example used the input parameter, but AddtlFrameDelimiter as an input parameter has only been supported since v8.2106.0.
Therefore, I changed the input-parameter example into the old-style module global parameter form.
# module global parameter case, supported since v6.XXXX
module(load="imtcp" AddtlFrameDelimiter="0")
input(type="imtcp" port="514")
# input parameter case, supported since v8.2106.0
module(load="imtcp")
input(type="imtcp" port="514" AddtlFrameDelimiter="0")

Shell Script on iMac no longer working with High Sierra

I recently upgraded my iMac 27” (mid-2011) from Yosemite to High Sierra and I am struggling to get back some functionality that I had working previously!
To briefly explain… First of all, I grab local weather data from weather underground using some Python scripts on a Raspberry pi3. These scripts also massage the data and then create and store an XML file on the pi. I also, on the pi, run a http server that looks for calls.
On an iPad, using iRule, I have a button that is called ‘Weather Forecast’. When this button is pressed it triggers a network resource on my ISY994i (Insteon) controller that, in turn, makes a call to the http server on the pi sending it a parameter. When the pi receives the call and validates the parameter, it runs another Python script (on the pi) that takes the data in the previously created XML file and puts it into a proper format for the next step. Finally, that script sends GET requests to the iMac, through Apache2, to read the weather forecast out loud.
This was working very well on Yosemite but now that I have upgraded the saying part is not working!
I have 3 shell scripts on the iMac that are called, from the pi, in this process…
saysomethinghttp9a.sh This is the first script called which reads the current volume level and stores it in a local file (on the iMac); then it changes the volume level to an acceptable volume (I use 18);
#!/bin/bash
echo -e "Content-type: text/html\n"
PHRASE=`echo "$QUERY_STRING" | sed -n 's/^.*phrase=\([^&]*\).*$/\1/p' | sed "s/+/ /g" | sed "s/%20/ /g"`
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
currVol=$(osascript -e "get volume settings")
echo "Current Volume Setting = $currVol"
var1=$( echo $currVol | cut -d":" -f1 )
var2=$( echo $currVol | cut -d":" -f2 )
origVol=$( echo $var2 | cut -d"," -f1 )
echo $origVol
parm="set volume output volume $origVol"
echo $parm
destfile="/Users/Sarah/Sound_Volume/Volume1.txt"
echo $parm > $destfile
osascript -e "set volume output volume 18"
cat << junk
</body>
</html>
junk
saysomethinghttp9.sh After the volume level has been set, this script does the ‘say’ part based upon what is sent from the pi. The pi calls this script and sends a parameter, which is the words I want said. This call is repeated several times for the intro, date, time, weather forecast and closing; and
#!/bin/bash
echo -e "Content-type: text/html\n"
PHRASE=`echo "$QUERY_STRING" | sed -n 's/^.*phrase=\([^&]*\).*$/\1/p' | sed "s/+/ /g" | sed "s/%20/ /g"`
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
say "Hey There"
cat << junk
</body>
</html>
junk
saysomethinghttp9b.sh Finally this last script is called, which reads the original volume from the file created in the first step and then resets the volume to that level.
#!/bin/bash
echo -e "Content-type: text/html\n"
cat << junk
<html>
<head>
<title>
saying
</title>
</head>
<body>
junk
file="/Users/Sarah/Sound_Volume/Volume1.txt"
echo $file
read -d $'\x04' parm < "$file"
echo $parm
osascript -e "$parm"
cat << junk
</body>
</html>
junk
(note that I go through the steps to adjust the volume because the volume for music, from iTunes, is much too loud for the ‘say’ commands)
In trying to figure out what is wrong I have tried numerous things:
I edited the script saysomethinghttp9.sh to eliminate the ‘say’ of a parameter passed to it and simply put in the line say “Hey there” (note that the code above is the edited version)
I then opened up a terminal session on the iMac and issued the commands from there...
./saysomethinghttp9a.sh
./saysomethinghttp9.sh
./saysomethinghttp9b.sh
All 3 scripts worked when called from the terminal so that wasn’t the problem.
To debug the calls to the iMac, I simplified the process by eliminating the iPad, the pi and the ISY994i from the process. Instead, I have been trying to make the calls to the iMac from a PC on the same network using a browser.
http://10.0.1.11/cgi-bin/saysomethinghttp9a.sh
http://10.0.1.11/cgi-bin/saysomethinghttp9.sh
http://10.0.1.11/cgi-bin/saysomethinghttp9b.sh
The result from running the scripts directly from the browser, on the PC, was that script saysomethinghttp9a.sh and saysomethinghttp9b.sh worked but saysomethinghttp9.sh did not!
Here are the Access and Error log entries from the iMac after trying the calls from the browser on the PC…
Access Log
10.0.1.195 - - [18/Dec/2017:21:33:30 -0500] "GET /cgi-bin/saysomethinghttp9a.sh HTTP/1.1" 200 197
10.0.1.195 - - [18/Dec/2017:21:34:04 -0500] "-" 408 -
10.0.1.195 - - [18/Dec/2017:21:33:44 -0500] "GET /cgi-bin/saysomethinghttp9.sh HTTP/1.1" 200 53
10.0.1.195 - - [18/Dec/2017:21:33:49 -0500] "GET /cgi-bin/saysomethinghttp9.sh HTTP/1.1" 200 53
10.0.1.195 - - [18/Dec/2017:21:35:05 -0500] "GET /cgi-bin/saysomethinghttp9b.sh HTTP/1.1" 200 135
Error Log
[Mon Dec 18 21:34:44.356130 2017] [cgi:warn] [pid 29997] [client 10.0.1.195:60109] AH01220: Timeout waiting for output from CGI script /Library/WebServer/CGI-Executables/saysomethinghttp9.sh
[Mon Dec 18 21:34:44.356519 2017] [core:error] [pid 29997] (70007)The timeout specified has expired: [client 10.0.1.195:60109] AH00574: ap_content_length_filter: apr_bucket_read() failed
[Mon Dec 18 21:34:49.949284 2017] [cgi:warn] [pid 29575] [client 10.0.1.195:60107] AH01220: Timeout waiting for output from CGI script /Library/WebServer/CGI-Executables/saysomethinghttp9.sh
[Mon Dec 18 21:34:49.949652 2017] [core:error] [pid 29575] (70007)The timeout specified has expired: [client 10.0.1.195:60107] AH00574: ap_content_length_filter: apr_bucket_read() failed
For full disclosure, my programming experience is relatively limited. I often piece things together using examples that I find online.
I do not know how to interpret the errors noted above! The only information I could find related to "The timeout specified has expired" was related to situations where a lot of data was being dealt with! In my case, there is very little data being processed!
I would appreciate some help or direction on how to proceed.
Edit:
After reading the comments from Mark Setchell, I added /usr/bin/id into my script and ran the script first in the terminal, and saw that the user name was correct. Then I ran the same script from the other PC and saw that the user name was '_www'! So I then edited the httpd.conf (apache2) file and changed the relevant section to include User Sarah and Group staff. However, this did not correct the problem!
Next I read up on how to 'use su to become that user and try the script'. Through the readings I kept finding suggestions to use sudo instead, and finally found a suggestion to edit the sudoers file. So I did this using the command sudo visudo. Then I added the following line:
Sarah ALL=(ALL) NOPASSWD: ALL
Then I tried running the script from the PC once again, and this time the script ran and is saying again!
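The user-identity check described above can be packaged as a tiny standalone CGI script (a sketch; the filename and install location are hypothetical):

```shell
#!/bin/bash
# Debug sketch: a minimal CGI that reports which user and groups the web
# server runs scripts as (under Apple's Apache this is typically _www,
# not your login user, which is why 'say' and 'osascript' can misbehave).
echo -e "Content-type: text/plain\n"
echo "user:   $(id -un)"
echo "groups: $(id -Gn)"
```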

Raspberry Pi (Raspbian Linux flavor) run script on wifi up

Currently I am experiencing issues with a script that should run automatically after the wifi adapter connects to a network.
After ridiculously extended research, I've made several attempts to add the script to /etc/network/if-up.d/. Run manually, my script works; however, it does not run automatically.
User permissions:
ls -al /etc/network/if-up.d/*
-rwxr-xr-x 1 root root 703 Jul 25 2011 /etc/network/if-up.d/000resolvconf
-rwxr-xr-x 1 root root 484 Apr 13 2015 /etc/network/if-up.d/avahi-daemon
-rwxr-xr-x 1 root root 4958 Apr 6 2015 /etc/network/if-up.d/mountnfs
-rwxr-xr-x 1 root root 945 Apr 14 2016 /etc/network/if-up.d/openssh-server
-rwxr-xr-x 1 root root 48 Apr 26 03:21 /etc/network/if-up.d/sendemail
-rwxr-xr-x 1 root root 1483 Jan 6 2013 /etc/network/if-up.d/upstart
lrwxrwxrwx 1 root root 32 Sep 17 2016 /etc/network/if-up.d/wpasupplicant -> ../../wpa_supplicant/ifupdown.sh
Also, I've tried to push the command directly into /etc/network/interfaces by adding the row:
post-up /home/pi/r/sendemail.sh
Contents of sendemail.sh:
#!/bin/sh
python /home/pi/r/pip.py
After the reboot, nothing actually happens. I've even tried sudo in front.
I assume that wpasupplicant is the thing which causes this, but I cannot figure out how to run my script from the ifupdown.sh script under /etc/wpa_supplicant.
Appreciate your help!
If you have no connectivity prior to initializing the wifi interface, I would suggest adding a cron job that runs a bash or Python script to check for connectivity every X minutes:
Ping the host;
If the host is up, then run the Python commands or external command.
This is rather ambiguous, but hopefully it is of some help.
Here is an example of a script that will check if a host is alive:
import re, commands

class CheckAlive:
    def __init__(self):
        # '-c 1' sends a single ping so the command returns instead of running forever
        myCommand = commands.getstatusoutput('ping -c 1 ' + 'google.com')
        searchString = r'ping: unknown host'
        match = re.search(searchString, str(myCommand))
        if match:
            # host is not alive
            print 'not alive, do not do stuff'
        else:
            # host is alive
            print 'alive, time to do stuff'
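Note that the commands module is Python 2 only and was removed in Python 3, which is the default on current Raspbian images. A rough Python 3 sketch that avoids shelling out to ping entirely by attempting a TCP connection instead (the host and port here are only examples):

```python
import socket

def host_alive(host="google.com", port=53, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    This replaces the py2 `commands` + ping approach: a successful
    connect is a good-enough signal that the network is up.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if host_alive():
    print('alive, time to do stuff')
else:
    print('not alive, do not do stuff')
```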

uwsgi - remove "address space usage" message from the log file

I'm using uwsgi version 2.0.8 and it's configured as following:
/home/user/env/bin/uwsgi
--chdir=/home/user/code
--module=XXXXXXX.wsgi:application
--master
--pidfile=/tmp/XXXX.pid
--socket=/tmp/uwsgi.sock
--chmod-socket
--processes=9
--harakiri=2000
--max-requests=50000
--vacuum
--home=/home/user/env
--stats /tmp/stats.socket
--stats-minified
--harakiri-verbose
--listen=2048
--log-date
--log-zero
--log-slow 2000
--log-5xx
--log-4xx
--gevent 5
-m
-b 25000
-L
--wsgi-file /home/user/code/wsgi.py
--touch-chain-reload /home/user/uwsgi.reload
And I'm getting a lot of messages like:
{address space usage: 1444040704 bytes/1377MB} {rss usage: 759623680 bytes/724MB} [pid: 4655|app: 0|req: 2897724/26075292] 184.172.192.42 () {44 vars in 563 bytes} [Tue Jul 7 18:45:17 2015] POST /some/url/ => generated 0 bytes in 34 msecs (HTTP/1.1 204) 1 headers in 38 bytes (3 switches on core 4)
What is the purpose of these messages and how can I remove them?
You told uWSGI to log any response with zero size (--log-zero). If instead you mean removing the memory report, just drop the -m flag (it is a shortcut for --memory-report).
