I need to set the value of a field in an XML file that exists on a remote Linux box. How do I find out which port I should connect to?
But even a simple ping call is not working:
import xmlrpclib
server = xmlrpclib.ServerProxy('http://10.77.21.240:9000')
print server.ping()
print "I'm in hurray"
But instead I got:
Traceback (most recent call last):
File "ping.py", line 3, in <module>
print server.ping()
File "C:\Python26\lib\xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "C:\Python26\lib\xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "C:\Python26\lib\xmlrpclib.py", line 1235, in request
self.send_content(h, request_body)
File "C:\Python26\lib\xmlrpclib.py", line 1349, in send_content
connection.endheaders()
File "C:\Python26\lib\httplib.py", line 892, in endheaders
self._send_output()
File "C:\Python26\lib\httplib.py", line 764, in _send_output
self.send(msg)
File "C:\Python26\lib\httplib.py", line 723, in send
self.connect()
File "C:\Python26\lib\httplib.py", line 704, in connect
self.timeout)
File "C:\Python26\lib\socket.py", line 514, in create_connection
raise error, msg
socket.error: [Errno 10061] No connection could be made because the target machine actively refused it.
What did I do wrong?
A couple of things to try / think about:
Go to a command prompt on the remote host and type "netstat -nap | grep 9000". If you don't get anything interesting back, it means that nothing is listening on port 9000.
You show the remote host at 10.77.21.240. This is an unroutable address on the net (AKA a private network address), so is the server itself (not just your app) pingable? If you are on Windows, go to Start -> Run and type "cmd". At the prompt, type "ping 10.77.21.240" and see what you get.
One more thought: the process may be up and running at port 9000 on a reachable host, but it may have opened the port as 127.0.0.1:9000 instead of 0.0.0.0:9000. The first address is only reachable by processes on the same machine; the second one opens the port on all IP addresses the machine has. A quick client-side check is sketched below.
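If you want to test reachability from the client side without xmlrpclib in the way, a plain TCP connect is enough. A minimal sketch, using the host and port from your example:
import socket

HOST, PORT = '10.77.21.240', 9000  # values from the question

try:
    # resolves the address and attempts a TCP connection with a 5 second timeout
    conn = socket.create_connection((HOST, PORT), timeout=5)
except socket.error as exc:
    print "cannot reach %s:%s -> %s" % (HOST, PORT, exc)
else:
    print "TCP connect to %s:%s succeeded" % (HOST, PORT)
    conn.close()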
Update in response to comment: The fact that it shouldn't be a problem doesn't eliminate the possibility it is. When you are debugging something that should be working, but isn't, you need to get fairly pedantic about checking each step, allowing yourself no room for 'Oh, I know that couldn't be the problem.' -- this is a verbal 'handwave' (often accompanied by a real handwave). You'd be surprised how often the problem exists in exactly the area you are handwaving! It takes 3 seconds to do the ping test. If it works, you move on, if it doesn't work ...
The first three steps in dealing with any system problem are:
Is it plugged in?
Is it turned on?
Is it configured properly?
And you have to do this for each and every piece of hardware/software in the food chain from your keyboard to the app. I'd guess 80% of 'sudden' failures are items 1 or 2 -- yes, really. Cables are a huge pain in the ass.
When on the phone with novices I normally start by going for the long pass -- if they can get news.google.com in a browser and then click on a random story, then I know that in general the network is OK. Why the news and why a random story? To sidestep browser cache issues. I've lost count of the number of times my older sister has called me up and announced "The Internet is broken!" The first thing we do is the google news test. 99% of the time it works, so I have her fire up reverse-WinVNC (UltraVNC's SingleClick is a God-send), I get on her machine, and then we see what the real problem is.
If the long pass doesn't work, then I see if they can get to their router. Etc. etc.
Good day, I have an app that uses CherryPy to serve a simple website. From time to time I get a DECRYPTION_FAILED_OR_BAD_RECORD_MAC error. I've never seen the issue myself while testing; it only shows up in the logs.
[26/Nov/2021:02:50:39] ENGINE Error in HTTPServer.serve
Traceback (most recent call last):
File "/home/user/app/venv/lib/python3.8/site-packages/cheroot/server.py", line 1810, in serve
self._connections.run(self.expiration_interval)
File "/home/user/app/venv/lib/python3.8/site-packages/cheroot/connections.py", line 201, in run
self._run(expiration_interval)
File "/home/user/app/venv/lib/python3.8/site-packages/cheroot/connections.py", line 218, in _run
new_conn = self._from_server_socket(self.server.socket)
File "/home/user/app/venv/lib/python3.8/site-packages/cheroot/connections.py", line 272, in _from_server_socket
s, ssl_env = self.server.ssl_adapter.wrap(s)
File "/home/user/app/venv/lib/python3.8/site-packages/cheroot/ssl/builtin.py", line 277, in wrap
s = self.context.wrap_socket(
File "/usr/lib/python3.8/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.8/ssl.py", line 1040, in _create
self.do_handshake()
File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: DECRYPTION_FAILED_OR_BAD_RECORD_MAC] decryption failed or bad record mac (_ssl.c:1131)
Is there a simple way for CherryPy to log this as a one-line error in the logs, or is there a way to fix it?
I encountered the same error (and also SSLV3_ALERT_BAD_CERTIFICATE). My setup: CherryPy 18.5.0; Python 3.7.
I use a self-signed certificate (I think this is the key detail for the issue).
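For reference, the code path in the traceback corresponds to the builtin SSL adapter configured with a certificate/key pair. A minimal sketch of that kind of setup (the paths and port are placeholders, not from the original post):
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "hello"

cherrypy.config.update({
    'server.socket_host': '0.0.0.0',
    'server.socket_port': 8443,               # placeholder port
    'server.ssl_module': 'builtin',           # the cheroot builtin adapter seen in the traceback
    'server.ssl_certificate': 'cert.pem',     # placeholder path to the self-signed cert
    'server.ssl_private_key': 'privkey.pem',  # placeholder path to its key
})

cherrypy.quickstart(Root())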
Because the certificate is not trusted, browsers warn that the connection is not properly secured, and users have to confirm that they still want to browse the pages.
Access attempts from Edge or Chrome do not trigger this CherryPy error. Firefox seems to send something to the server even before it has confirmed that the request should go ahead (i.e. even before the user clicks through the warning).
IMHO, CherryPy should handle the SSL errors (catch the exceptions) and let the application decide what to do with them.
Since I cannot control users' browser choice, nor can I catch the SSL exception, my "solution" was to have the users install the self-signed certificate. From that point on, they can browse the pages without a warning and no such CherryPy error shows up in the logs anymore.
I know this is a pretty weak solution, but nobody answered, so I thought I'd share it, as it might help someone.
I've tried to render an animation using Network Render. I connected my PC and my laptop without further problems. But when I clicked "Render animation on network", after a few seconds the following error occurred:
AL lib: (EE) UpdateDeviceParams: Failed to set 44100hz, got 48000hz instead
Traceback (most recent call last):
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\operat ors.py", line 85, in invoke
return self.execute(context)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\operat ors.py", line 77, in execute
scene.network_render.job_id = client.sendJob(conn, scene, True)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\client .py", line 121, in sendJob
return sendJobBlender(conn, scene, anim, can_save)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\client .py", line 340, in sendJobBlender
response = conn.getresponse()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 1172, in getresponse
response.begin()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 313, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "F:\Program Files (x86)\Blender\2.74\python\lib\socket.py", line 371, inreadinto
return self._sock.recv_into(b)
socket.timeout: timed out
I asked Google: somebody worked around the problem by changing "the default timeout to 1000 (instead of 300) (in the socket.py file[...])". I can't find this line; I guess they changed it in the current version. Since I have no experience with Python, I don't know how to change it now.
I hope you can help me!
The addon would be a better place to make the change, rather than the socket module. If you look in your addons folder you will find netrender/utils.py, which contains a few lines that use socket.setdefaulttimeout; you can adjust the value there.
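Roughly, the relevant call looks like this (a sketch; the exact location and surrounding code in utils.py differ between Blender versions):
import socket

# raise the default timeout applied to every new socket, so slow jobs
# get more time before socket.timeout is raised
socket.setdefaulttimeout(1000)  # seconds; the old advice was 1000 instead of 300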
An even better solution would be to look at why the connection is timing out; two computers in the same room should not get any timeouts. A common cause of timeouts is the inability to get a connection at all: firewalls are good at stopping connections, so you may want to check that the port used by network render allows incoming connections, and that Blender is running with network render turned on to accept the connection. The default port is 8000, which could also be in use by another application; you can configure each computer to use a different port if needed.
I have a Python server script, given below:
# create the server, binding to localhost on port 9999
#
server = SocketServer.TCPServer((options.ip, options.port), cmdHandler)
While running a Python server script I am getting the following error:
File "C:\Devcon\OCDServer_New.py", line 225, in runTest
server = SocketServer.TCPServer((options.ip, options.port), cmdHandler)
File "C:\Python27\lib\SocketServer.py", line 419, in __init__
self.server_bind()
File "C:\Python27\lib\SocketServer.py", line 430, in server_bind
self.socket.bind(self.server_address)
File "C:\Python27\lib\socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
How can I solve this? One link I found suggested that shutting down the socket is the solution, but here the issue occurs while creating the socket itself. It might be because it was opened previously and not closed correctly, but I am unable to reset everything and run the script again. I'm stuck here.
Try using a different port to see whether you get the same message or a different one. Something else could be using the same port, which would prevent your code from using it. A quick way to check which ports are free is sketched below.
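A minimal sketch of such a check (the port numbers are just examples):
import socket

def port_is_free(port, host='0.0.0.0'):
    # try to bind to the port; if the bind fails, something else is holding it
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except socket.error:
        return False
    finally:
        s.close()

for candidate in (9999, 10000, 10001):  # example ports
    print candidate, port_is_free(candidate)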
I have a data-intensive Python script that uses HTTP connections to download data. I usually run it overnight. Sometimes the connection will fail, or a website will be unavailable momentarily. I have basic error-handling that catches these exceptions and tries again periodically, exiting gracefully (and logging errors) after 5 minutes of retrying.
However, I've noticed that sometimes the job just freezes. No error is thrown, and the job is still running, sometimes hours after the last print message.
What is the best way to:
monitor a Python script,
detect if it is unresponsive after a given interval,
exit it if it is unresponsive,
and start another one?
UPDATE
Thank you all for your help. As a few of you have pointed out, the urllib and socket modules don't have timeouts set properly. I am using Python 2.5 with the Freebase and urllib2 modules, and catching and handling MetawebErrors and urllib2.URLErrors. Here is a sample of the error output after the last script hung for 12 hours:
File "/home/matthew/dev/projects/myapp_module/project/app/myapp/contrib/freebase/api/session.py", line 369, in _httpreq_json
resp, body = self._httpreq(*args, **kws)
File "/home/matthew/dev/projects/myapp_module/project/app/myapp/contrib/freebase/api/session.py", line 355, in _httpreq
return self._http_request(url, method, body, headers)
File "/home/matthew/dev/projects/myapp_module/project/app/myapp/contrib/freebase/api/httpclients.py", line 33, in __call__
resp = self.opener.open(req)
File "/usr/lib/python2.5/urllib2.py", line 381, in open
response = self._open(req, data)
File "/usr/lib/python2.5/urllib2.py", line 399, in _open
'_open', req)
File "/usr/lib/python2.5/urllib2.py", line 360, in _call_chain
result = func(*args)
File "/usr/lib/python2.5/urllib2.py", line 1107, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.5/urllib2.py", line 1080, in do_open
r = h.getresponse()
File "/usr/lib/python2.5/httplib.py", line 928, in getresponse
response.begin()
File "/usr/lib/python2.5/httplib.py", line 385, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.5/httplib.py", line 343, in _read_status
line = self.fp.readline()
File "/usr/lib/python2.5/socket.py", line 372, in readline
data = recv(1)
KeyboardInterrupt
You'll notice the blocked socket read at the bottom (the script was stuck in recv when I interrupted it). Since I'm using Python 2.5 and don't have access to the third urllib2.urlopen argument, is there another way to watch for and catch this condition? For example, I'm catching URLErrors - is there another type of error in urllib2 or socket that I can catch which will help me?
It sounds like there is a bug in your script. The answer is not to monitor the bug, but to hunt down the bug and fix it.
We can't help you find the bug without seeing some code. But as a general idea, you might want to use logging to pinpoint where the problem is occurring, and write unit tests to help you build confidence about which parts of your code do not have the bug.
Another idea is to break your "stuck" program with Ctrl-C and to study the traceback message. It will show you what line your program was last executing.
That may give you a clue where the script is going wrong.
Since the program is doing web communication, I'd fire up a debugging proxy like Charles http://www.charlesproxy.com/ and see if there's anything kooky happening in the back-and-forth between your script and the server.
Also consider that the socket module has no timeout set by default and therefore can hang. As of Python 2.6, however, you can pass a third argument to urllib2.urlopen (if you are using urllib2, that is), specifying a request timeout period in seconds. That way the script will error out rather than go catatonic waiting for a response from a perhaps uncooperative server. If you haven't already, I'd check these things out before trying anything more elaborate.
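On Python 2.6+ that looks roughly like this (a sketch; the URL is just an example):
import socket
import urllib2

try:
    # the third positional argument is the timeout in seconds (Python 2.6+)
    response = urllib2.urlopen('http://www.example.com/', None, 30)
    data = response.read()
except urllib2.URLError as exc:
    print 'request failed:', exc
except socket.timeout:
    print 'request timed out while reading'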
Update for Python 2.5:
To do this in Python < 2.6, you would have to set the timeout value globally in the socket module, which urllib2 uses. I haven't tried this, but it presumably works. I found this info at http://www.voidspace.org.uk/python/articles/urllib2.shtml:
import socket
import urllib2
# timeout in seconds
timeout = 10
socket.setdefaulttimeout(timeout)
# this call to urllib2.urlopen now uses the default timeout
# we have set in the socket module
req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)
A simple way to do what you ask is to have your current program send UDP heartbeat packets to a second watchdog program that monitors them. If the watchdog doesn't receive a packet within a certain amount of time, it kills the other Python process and then starts a new one. A rough sketch of the idea is below.
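This is only a sketch under assumed names: the port, the timeout, and the 'worker.py' script are all made up for illustration.
import os
import signal
import socket
import subprocess
import sys
import time

HEARTBEAT_ADDR = ('127.0.0.1', 5007)  # arbitrary port, just for illustration
TIMEOUT = 120                         # seconds without a heartbeat before restarting

def send_heartbeat():
    # call this periodically from the worker's main loop
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto('alive', HEARTBEAT_ADDR)
    sock.close()

def watchdog(worker_cmd):
    # run the worker and restart it whenever the heartbeats stop arriving
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(HEARTBEAT_ADDR)
    sock.settimeout(TIMEOUT)
    while True:
        proc = subprocess.Popen(worker_cmd)
        while proc.poll() is None:
            try:
                sock.recv(1024)                    # a heartbeat arrived, keep waiting
            except socket.timeout:
                os.kill(proc.pid, signal.SIGKILL)  # unresponsive: kill it and restart
                break
        time.sleep(1)

if __name__ == '__main__':
    watchdog([sys.executable, 'worker.py'])  # 'worker.py' is a hypothetical name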
You could run your script in pdb and break in when you suspect it's frozen. It won't work on its own, but might help you figure out why it's freezing.
I am trying to run a simple Python-based web server given here.
And I get the following error message:
Traceback (most recent call last):
File "webserver.py", line 63, in <module>
main()
File "webserver.py", line 55, in main
server = HTTPServer(('', 80), MyHandler)
File "/usr/lib/python2.5/SocketServer.py", line 330, in __init__
self.server_bind()
File "/usr/lib/python2.5/BaseHTTPServer.py", line 101, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/lib/python2.5/SocketServer.py", line 341, in server_bind
self.socket.bind(self.server_address)
File "<string>", line 1, in bind
socket.error: (13, 'Permission denied')
As far as I understand, my firewall blocks access to the socket? Am I right? If that is the case, how can I change the permissions? Is it dangerous to change these permissions?
If you want to bind to port numbers < 1024, you need to be root. It's not a firewall issue; it's enforced by the operating system. Here's a reference from w3.org, and a FAQ entry specific to Unix.
If you want to run on a port under 1024, you'll need to be root. You can open the socket as root and then drop root's privileges for the rest of your program by switching to another user, as in the sketch below.
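A minimal sketch of that approach, using the stock SimpleHTTPRequestHandler as a stand-in for your MyHandler and 'nobody' as an example unprivileged user:
import os
import pwd
from BaseHTTPServer import HTTPServer
from SimpleHTTPServer import SimpleHTTPRequestHandler

# bind to the privileged port while still running as root
server = HTTPServer(('', 80), SimpleHTTPRequestHandler)

# then switch to an unprivileged user for the rest of the run
unprivileged = pwd.getpwnam('nobody')  # example user
os.setgid(unprivileged.pw_gid)         # drop the group first, then the user
os.setuid(unprivileged.pw_uid)

server.serve_forever()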
Most times it's easier to run a real webserver (say nginx) on port 80 and proxy the requests through to your program, which you can run on a high-numbered port (8080, for example). This way you don't need to worry about screwing something up while your process is running as root, because it never runs as root.
If it's just for testing, run the server on port 8080 and connect at http://localhost:8080/