Attempting to follow the tutorial here.
After doing basic CRUD stuff, the tutorial has you edit the app.config file to use the LevelDB Backend for 2i, which in my case meant updating line 83 of /usr/local/Cellar/riak/1.4.2/etc/app.config from
{storage_backend, riak_kv_bitcask_backend},
to
{storage_backend, riak_kv_eleveldb_backend},
and then restarting riak.
I had thus far been running Riak on port 8098, but their tutorial here references port 10017 instead:
# Starting Client
client = riak.RiakClient(pb_port=10017, protocol='pbc')
# Creating Buckets
customer_bucket = client.bucket('Customers')
order_bucket = client.bucket('Orders')
order_summary_bucket = client.bucket('OrderSummaries')
# Storing Data
cr = customer_bucket.new(str(customer['customer_id']),
                         data=customer)
cr.store()
If I try to run the code as written, I get this:
File "r.py", line 104, in <module>
cr.store()
File ".../riak/riak-python-client-master/riak/riak_object.py", line 281, in store
timeout=timeout)
File ".../riak/client/transport.py", line 127, in wrapper
return self._with_retries(pool, thunk)
File ".../riak/client/transport.py", line 82, in _with_retries
raise e.args[0]
socket.error: [Errno 61] Connection refused
suggesting that, clearly, Riak is not listening on port 10017.
However, when I change it to port 8098
client = riak.RiakClient(pb_port=8098, protocol='pbc')
the application just freezes on the cr.store() line. Does the eleveldb backend expect to run on a port other than the default?
Never mind: 8098 is the HTTP port. When I changed it to 8087, Riak's default Protocol Buffers port, it works fine.
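For reference, this is the connection setup that ended up working for me (a minimal sketch; 8087 is the default Protocol Buffers port, while 8098 serves HTTP):
import riak

# 8087 is the default Protocol Buffers port; 8098 is the HTTP port.
client = riak.RiakClient(pb_port=8087, protocol='pbc')
customer_bucket = client.bucket('Customers')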
I am trying to connect to a remote server and list files inside a path using the below code:
import pysftp

myHostname = "myhostname.com"
myPort = <someportnumber>
myUsername = "<valid username>"
myPassword = "<valid password>"

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None

with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword, cnopts=cnopts, port=myPort) as sftp:
    print("Connection successfully established ... ")
    sftp.chdir('/logs/dev')
    # Obtain the structure of the remote directory
    directory_structure = sftp.listdir_attr()
    # Print data
    for attr in directory_structure:
        print(attr.filename, attr)
But when I run it, it is unable to establish a connection and throws the exception below:
Traceback (most recent call last):
File "c:/Users/611841191/Documents/SFTP File Download/SFTPFileDownload.py", line 11, in <module>
with pysftp.Connection(host=myHostname, username=myUsername, password=myPassword, cnopts= cnopts, port=myPort) as sftp:
File "C:\Users\611841191\AppData\Roaming\Python\Python38\site-packages\pysftp\__init__.py", line 140, in __init__
self._start_transport(host, port)
File "C:\Users\611841191\AppData\Roaming\Python\Python38\site-packages\pysftp\__init__.py", line 176, in _start_transport
self._transport = paramiko.Transport((host, port))
File "C:\Users\611841191\AppData\Roaming\Python\Python38\site-packages\paramiko\transport.py", line 415, in __init__
raise SSHException(
paramiko.ssh_exception.SSHException: Unable to connect to <myhostname.com>: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond
Can anyone help me figure out why this exception is being thrown? I tried other remote servers and the code works fine for them; it throws this exception for one particular remote server only.
If your code cannot connect somewhere using some protocol (SFTP in this case), the first thing to test is whether you can connect using the same protocol with any existing GUI or command-line client from the same machine that runs your code.
If that does not work either, you do not have a programming question, but a simple network connectivity problem.
If you need help resolving the problem, please go to Super User or Server Fault.
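If you want a quick programmatic check of plain TCP reachability before involving SFTP at all, something like this sketch works (host and port are placeholders; substitute the values you are actually using):
import socket

host = "myhostname.com"   # placeholder, use your real hostname
port = 22                 # placeholder, use your real SFTP port

try:
    # Fails with a timeout or "connection refused" if the port is not
    # reachable from this machine, independent of pysftp/paramiko.
    sock = socket.create_connection((host, port), timeout=10)
    sock.close()
    print("TCP connection to %s:%s succeeded" % (host, port))
except OSError as e:
    print("Cannot reach %s:%s: %s" % (host, port, e))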
I'm getting a weird error while trying to execute an RPC using thrift on python. I have found similar issues online, but none of them really apply to my situation.
Here is the error I'm getting
No handlers could be found for logger "thrift.transport.TSocket"
Traceback (most recent call last):
File "experiment.py", line 71, in <module>
transport.open()
File "/usr/local/lib/python2.7/dist-packages/thrift/transport/TTransport.py", line 152, in open
return self.__trans.open()
File "/usr/local/lib/python2.7/dist-packages/thrift/transport/TSocket.py", line 113, in open
raise TTransportException(TTransportException.NOT_OPEN, msg)
thrift.transport.TTransport.TTransportException: Could not connect to any of [('192.168.178.44', 9000)]
The following is, I believe, the code which produces it.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
# TTSService is the Thrift-generated module for my service.

TECS_SERVER_IP = "192.168.178.44"
TECS_SERVER_PORT = 9000

transport = TSocket.TSocket(TECS_SERVER_IP, TECS_SERVER_PORT)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)
client = TTSService.Client(protocol)
transport.open()
This happens whenever I try to communicate with another machine, so I tried with the IP "127.0.0.1" and it works. However, using the machine's own LAN IP, "192.168.178.44", which should refer to the same computer, also produces the error.
To me it seems like it cannot resolve IP addresses for some reason...
Any ideas on what's causing this and how to fix it?
I'm using Python 2.7.12, thrift 0.9.3 and Ubuntu 16.04, but I also got the error on Windows 10.
This is how my thrift service starts
handler = TTSHandler()
handler.__init__()
transport = TSocket.TServerSocket(host='localhost', port=9000)
processor = TTSService.Processor(handler)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
server = TServer.TSimpleServer(processor, transport, tfactory, pfactory)
server.serve()
Your server should bind to that address before the client can connect to it:
TSocket.TServerSocket(host='192.168.178.44', port=9000)
or use host='0.0.0.0', which means bind to all IPv4 addresses on the machine.
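A minimal sketch of the adjusted server start, reusing the handler and generated TTSService module from your snippet (their imports are omitted here because they are specific to your project):
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from thrift.server import TServer

handler = TTSHandler()
# Bind on all IPv4 interfaces so clients on other machines can connect.
transport = TSocket.TServerSocket(host='0.0.0.0', port=9000)
processor = TTSService.Processor(handler)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
server = TServer.TSimpleServer(processor, transport, tfactory, pfactory)
server.serve()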
I'm getting re-started with App Engine after not having used it in a while. I'm using the 64-bit Linux Go runtime, version 1.8.1.
I believe I'm following the steps from the documentation correctly, and I believe I'm doing what's worked correctly in the past, but I'm getting this error when attempting to launch dev_appserver.py:
$ dev_appserver.py .
INFO 2013-07-11 07:24:45,919 sdk_update_checker.py:244] Checking for updates to the SDK.
INFO 2013-07-11 07:24:46,230 sdk_update_checker.py:288] This SDK release is newer than the advertised release.
WARNING 2013-07-11 07:24:46,443 simple_search_stub.py:955] Could not read search indexes from /tmp/appengine.batterybotinfo.darshan/search_indexes
Traceback (most recent call last):
File "/home/darshan/bin/dev_appserver.py", line 182, in
_run_file(__file__, globals())
File "/home/darshan/bin/dev_appserver.py", line 178, in _run_file
execfile(script_path, globals_)
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 695, in
main()
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 688, in main
dev_server.start(options)
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/devappserver2.py", line 659, in start
apis.start()
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 137, in start
super(APIServer, self).start()
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 295, in start
if self._start_all_dynamic_port(host_ports):
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 348, in _start_all_dynamic_port
server.start()
File "/home/darshan/software/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 194, in start
socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
TypeError: getaddrinfo() argument 1 must be string or None
My first thought was that I might be using an incorrect version of Python. Sure enough, I'm using 2.7.5, and the documentation clearly states that 2.5 is necessary. However, the documentation seems to be outdated, because after installing 2.5 and setting my system to use it, I got this error:
Error: Python 2.5 is not supported. Please use version 2.7.
Okay, so back to 2.7.5 and my initial error.
I'm not sure if this is a bug in the dev_appserver.py Python code (I'm guessing not, as it's been out for a month), an issue with my Python installation, or something else about my system that isn't configured according to Google's expectations.
I'd rather not mess with the dev_appserver.py code unless necessary, but I'm happy to poke at it to help figure out what's going wrong. The error is on line 194; here are lines 190-195:
# AF_INET or AF_INET6 socket
# Get the correct address family for our host (allows IPv6 addresses)
host, port = self.bind_addr
try:
    info = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
                              socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
I've determined that the containing method is called twice. The first time host is always "127.0.0.1" and port is 0. The second time is the one that crashes; host is always 10 (an int, not a string), and port is a seemingly-random five-digit int.
I've tried hard-coding host to "127.0.0.1" and port to either 8080 or 0, but then I get another error. I feel in over my head, and I suspect I'm not going to resolve the real issue by changing things I don't really understand. Googling for the error message hasn't helped.
Persistent Googling eventually paid off. Despite this question having a very different (and much more informative) error message, the solution turned out to be the same: ensure that /etc/hosts contains only a single entry for localhost.
Notably, my system contained both of these lines:
127.0.0.1 localhost
::1 localhost
Commenting out the second (and adding a comment to document why) solved my issue:
127.0.0.1 localhost
# Having multiple localhost entries causes App Engine dev_appserver.py to fail.
# IPv6 not currently needed, and the dev server IS needed, so commenting out.
#::1 localhost
dev_appserver.py now starts and works properly.
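For anyone who wants to verify what localhost resolves to after editing /etc/hosts, a quick sketch (the port here is arbitrary):
import socket

# With both the 127.0.0.1 and ::1 lines active this typically returns entries
# for two address families; with only the IPv4 line it returns just one.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        'localhost', 8080, socket.AF_UNSPEC, socket.SOCK_STREAM):
    print("%s %s" % (family, sockaddr))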
If you edit lines 189-194 to this, it should work until Google releases an update. This is basically just checking the type of host and returning early if it isn't a string.
189     # AF_INET or AF_INET6 socket
190     # Get the correct address family for our host (allows IPv6 addresses)
191     host, port = self.bind_addr
192     try:
193       if type(host) is not str:
194         return
195       info = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
196                                 socket.SOCK_STREAM, 0, socket.AI_PASSIVE)
I'm on Windows 7. I started with a tutorial helloworld.py and everything is going fine, but I don't know how to quit the service. I use
quit()
but the command line gives me an error message and exits. The service is still running and holding my port 8080. I haven't found a way to shut it down manually.
File "C:\python32\lib\site-packages\cherrypy\process\wspbus.py", line 197, in
publish
output.append(listener(*args, **kwargs))
File "C:\python32\lib\site-packages\cherrypy\_cpserver.py", line 151, in start
ServerAdapter.start(self)
File "C:\python32\lib\site-packages\cherrypy\process\servers.py", line 167, in
start
wait_for_free_port(*self.bind_addr)
File "C:\python32\lib\site-packages\cherrypy\process\servers.py", line 410, in
wait_for_free_port
raise IOError("Port %r not free on %r" % (port, host))
IOError: Port 8080 not free on '0.0.0.0'
According to this page, quit() is not appropriate for this task.
Depending on how you run your server, you should consider using cherrypy.engine.exit:
>>> help(cherrypy.engine.exit)
exit(self) method of cherrypy.process.win32.Win32Bus instance
Stop all services and prepare to exit the process.
Include this in your Python file:
@cherrypy.expose
def shutdown(self):
    cherrypy.engine.exit()
Then add a link on your page.
<a id="shutdown"; href="./shutdown">Shutdown Server</a>
Newcomer and first-ever question here.
I am using Python's multiprocessing module, which is currently creating a Manager and a number of processes (45) on my localhost.
My Manager is set up as follows:
manager = QueueManager(address=('', 50000), authkey='abracadabra')
manager.get_server().serve_forever()
I also want to create some other client processes remotely on another computer. So, let's say my IP is a.b.c.d; the Manager of the client on the remote computer is set up as follows:
manager = QueueManager(address=('a.b.c.d', 50000), authkey='abracadabra')
manager.connect()
(yes, it's copy-pasted from the documentation).
However, when I run the server, all 45 processes on localhost are fine; then I run the remote client and I get this:
Traceback (most recent call last):
File "govmap-parallel-crawler-client.py", line 144, in <module>
manager.connect()
File "/usr/lib/python2.6/multiprocessing/managers.py", line 474, in connect
conn = Client(self._address, authkey=self._authkey)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 134, in Client
c = SocketClient(address)
File "/usr/lib/python2.6/multiprocessing/connection.py", line 252, in SocketClient
s.connect(address)
File "<string>", line 1, in connect
socket.error: [Errno 110] Connection timed out
Both computers can ping and ssh each other without problems.
My guess: there is a firewall (or two!) in between making the connection impossible. Is this correct?
If yes: is there a way to use a safe, known port in order to avoid the firewall, or maybe a more polite solution?
If no: what is happening?
Thanks!!
Use an SSH tunnel for the interconnect? E.g. on the client:
ssh a.b.c.d -L12345:localhost:50000
If the client connects to localhost port 12345, it should be tunnelled to a.b.c.d port 50000.
EDIT: Of course, using an SSH tunnel might not be the best solution in a production environment, but at least it lets you eliminate other issues.
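With the tunnel up, the remote client would connect to the local end of the tunnel instead of a.b.c.d directly; a sketch, assuming the same QueueManager subclass as in your script:
# The tunnel forwards localhost:12345 on this machine to a.b.c.d:50000.
manager = QueueManager(address=('localhost', 12345), authkey='abracadabra')
manager.connect()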
As defined by your snippet, the server listens only on localhost (127.0.0.1) and not on a.b.c.d, so you cannot connect from the remote client.
To do so, use:
manager = QueueManager(address=('a.b.c.d', 50000), authkey='abracadabra')
manager.get_server().serve_forever()