On my synology I have this docker container running: https://registry.hub.docker.com/r/mgvazquez/ibgateway/
In the "manual" it says: "In this example you will launch the Interactive Brokers Gateway in paper mode listening on port 4001, and the VNC Server listening on port 5900"
So in the docker container I did the following port mapping:
Local port 32778 to container 5900 and local port 32776 to container 4001. My Synology Nas is 192.168.2.6.
When I connect from my local pc using vnc to 192.168.2.6:32778 it works perfectly.
Now, in my Python script I do:
from ib_insync import *
ib = IB()
# use this instead for IB Gateway
ib.connect('192.168.2.6:32776', 4002, clientId=1)
The 4002 is a socket port setting inside the gateway.
When I run the script I get "Getaddrinfo failed". Does not make sense to me.
What can be the issue here?
According to the API documentation at https://ib-insync.readthedocs.io/api.html#module-ib_insync.ib,
connect uses the following syntax:
connect(host='127.0.0.1', port=7497, clientId=1, timeout=4, readonly=False, account='')
host (str) – Host name or IP address.
port (int) – Port number.
clientId (int) – ID number to use for this client; must be unique per connection. Setting clientId=0 will automatically merge manual TWS trading with this client.
timeout (float) – If establishing the connection takes longer than timeout seconds then the asyncio.TimeoutError exception is raised. Set to 0 to disable timeout.
readonly (bool) – Set to True when API is in read-only mode.
account (str) – Main account to receive updates for.
So your code:
# use this instead for IB Gateway
ib.connect('192.168.2.6:32776', 4002, clientId=1)
should be changed to:
# use this instead for IB Gateway
ib.connect('192.168.2.6', 32776, clientId=1)
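The "getaddrinfo failed" error comes from passing the whole '192.168.2.6:32776' string as the host argument, which cannot be resolved as a hostname. As a sketch, a small helper (the name and the 4001 fallback are my own, not part of ib_insync) that splits such strings:

```python
def split_host_port(addr, default_port=4001):
    """Split 'host:port' into (host, port); fall back to a default port."""
    host, sep, port = addr.rpartition(':')
    if sep:
        return host, int(port)
    return addr, default_port

host, port = split_host_port('192.168.2.6:32776')
print(host, port)  # 192.168.2.6 32776
# then: ib.connect(host, port, clientId=1)
```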
First, just for testing, add a mapping for host port 4001 to container port 4001 and connect to it directly:
ib.connect('192.168.2.6', 4001, clientId=1)
Second, check that your IB socat service is running, since it is that service which establishes two bidirectional byte streams and transfers data between ports 4001 and 4002:
echo "Starting Interactive Brokers Controller" | info
exec socat TCP-LISTEN:4001,fork TCP:127.0.0.1:4002 2>&1 | info
The Dockerfile registers it.
Try and add a mapping for port 4002.
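Independent of the mappings, it helps to first confirm that the mapped port is reachable at all. A minimal stdlib sketch (the host and port values are the ones from the question, assumed for your setup):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers gaierror, connection refused, timeout
        return False

# e.g. port_open('192.168.2.6', 32776) should be True before ib.connect()
```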
Related
When I launch a container using the docker SDK for Python, I can specify the host port as None so that the SDK will pick a random available port for me:
import docker

client = docker.from_env()
container = client.containers.run(
    "bfirsh/reticulate-splines",
    ports={6379: None},  # SDK chooses a host port
    detach=True,
)
The problem is that I want to know after the run command which host port did the SDK choose. How do I do that?
You need to reload the container, then use container.ports:
import docker

client = docker.from_env()
container = client.containers.run(
    "bfirsh/reticulate-splines", ports={6379: None}, detach=True
)
container.reload()  # need to reload
print(container.ports)
Output
{'6379/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '53828'}]}
This is only available in versions greater than 4.0.2 (which is at least 4 years old now)
Commit that added this attribute
Unfortunately a Container object does not have a ports attribute (prior to 3.7.2, as @python_user notes). By printing the attrs dictionary I was able to find out that the host port is contained inside the NetworkSettings attribute. In my case retrieving the host port looks like this:
attrs['NetworkSettings']['Ports']['6379/tcp'][0]['HostPort']
# container port specified at launch ^^^
A more general solution would be to avoid the container port (6379) and search for the 'HostPort' key instead.
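In that spirit, a short helper sketch (the function name is mine, purely illustrative) that returns the first bound host port without hardcoding the container port, shown against the sample output above:

```python
def first_host_port(attrs):
    """Return the first bound 'HostPort' from container.attrs, or None."""
    for bindings in attrs['NetworkSettings']['Ports'].values():
        if bindings:  # unpublished ports map to None
            return bindings[0]['HostPort']
    return None

sample = {'NetworkSettings': {'Ports': {'6379/tcp': [{'HostIp': '0.0.0.0', 'HostPort': '53828'}]}}}
print(first_host_port(sample))  # '53828'
```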
I am having some difficulty connecting to DataStax Cassandra 6.8 hosted on a CentOS 7.x server.
I am able to successfully connect locally inside the Centos Shell and the nodetool status shows the cluster Up and Normal.
Things I tried in cassandra.yaml file -
Changed the listen_address parameter from localhost to the IP address of the server. Result -> DSE is not starting.
Commented out the listen_address line. Result -> DSE is not starting.
Left the listen_address parameter blank. Result -> DSE is not starting.
as mentioned above -
OS - CentOS 7
DSE Version - 6.8
Install method RPM
Python program -
from cassandra.cluster import Cluster

# cluster = Cluster()
cluster = Cluster(['192.168.1.223'])
# To establish connection and begin executing queries, need a session
session = cluster.connect()
row = session.execute("select release_version from system.local;").one()
if row:
    print(row[0])
else:
    print("An error occurred.")
Exception thrown from python ->
NoHostAvailable: ('Unable to connect to any servers', {'192.168.1.223:9042': ConnectionRefusedError(10061, "Tried connecting to [('192.168.1.223', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Both my PC and my server are on the same network and I am able to ping from each other.
Any help is highly appreciated.
Thanks
The same question was asked on https://community.datastax.com/questions/12174/ so I'm re-posting my answer here.
This error indicates that you are connecting to a node which is not listening for CQL connections on IP 192.168.1.223 and CQL port 9042:
No connection could be made because the target machine actively refused it
The 2 most likely causes are:
DSE is not running
DSE isn't listening for client connections on the right IP
You already indicated that you are not able to start DSE. You'll need to review the logs (located in /var/log/cassandra by default) for clues as to why it's not running.
The other possible issue is that you haven't configured native_transport_address (rpc_address in open-source Cassandra). You need to set this to an IP address that is accessible to clients (your app); otherwise it will default to localhost (127.0.0.1).
In cassandra.yaml, configure the node with:
listen_address: private_ip
native_transport_address: public_ip
If you are just testing it on a local network, set both properties to the server's IP address. Cheers!
[EDIT] I just saw your conversation with @Alex Ott. I'm posting my response here because it won't fit in a comment.
This startup error means that the node couldn't talk to any seed nodes so it won't be able to join the cluster:
ERROR [DSE main thread] 2021-08-25 06:40:11,413 CassandraDaemon.java:932 - \
Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any peers
If you only have 1 node in the cluster, configure the seeds list in cassandra.yaml with the server's own IP address:
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.223"
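Taken together, the relevant cassandra.yaml settings for this single-node LAN setup might look like the following (IP address from the question; native_transport_address is the DSE name, rpc_address in open-source Cassandra):

```yaml
# cassandra.yaml -- single node reachable on the local network
listen_address: 192.168.1.223            # inter-node (gossip) address
native_transport_address: 192.168.1.223  # client (CQL) address
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.223"
```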
Redis is running on the EC2 instance as a daemon:
ps aux | grep redis-server
redis 1182 0.0 0.8 38856 8740 ? Ssl 21:40 0:00 /usr/bin/redis-server 127.0.0.1:6379
netstat -nlpt | grep 6379
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN -
tcp6 0 0 :::6379 :::* LISTEN 1388/redis-server *
Temporarily I have set network inbound to all traffic, so it is likely not a network security problem.
Type Protocol Port Range Source Description
All traffic All All 0.0.0.0/0
But when I try to access from the python side:
import redis
r = redis.StrictRedis(host='ec2-xx-xxx-xx-xx.compute-1.amazonaws.com', port=6379, db=0)
r.set('foo', 'bar')
r.get('foo')
I get the following error:
ConnectionError: Error 111 connecting to
ec2-xx-xxx-xx-xx.compute-1.amazonaws.com:6379. Connection refused.
What may cause such an issue, given that network inbound is open to all traffic and Redis runs fine on the EC2 instance?
Your Redis is not listening for connections from outside; instead it is listening for local connections only:
tcp 0 0 127.0.0.1:6379
The following works for Ubuntu. Edit /etc/redis/redis.conf and bind to 0.0.0.0:
# bind 127.0.0.1
bind 0.0.0.0
and then restart the service:
sudo service redis-server restart
WARNING: Do not allow 0.0.0.0/0 in your security group; that lets anyone connect. Instead allow only specific IPs or CIDRs.
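Reading the netstat line mechanically: a listener bound to 127.0.0.1 only ever accepts loopback clients. A toy helper (my own, purely illustrative) that captures that rule:

```python
LOOPBACK = ('127.0.0.1', '::1', 'localhost')

def accepts_remote_clients(local_address):
    """True if a listener bound to this 'host:port' local address is
    reachable from other machines (i.e. not bound to loopback)."""
    host = local_address.rsplit(':', 1)[0]
    return host not in LOOPBACK

print(accepts_remote_clients('127.0.0.1:6379'))  # False: loopback only
print(accepts_remote_clients('0.0.0.0:6379'))    # True: all interfaces
```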
Configuration
LOCAL: A local machine that will create an ssh connection and issue commands on a REMOTE box.
PROXY: An EC2 instance with ssh access to both LOCAL and REMOTE.
REMOTE: A remote machine sitting behind a NAT Router (inaccessible by LOCAL, but will open a connection to PROXY and allow LOCAL to tunnel to it).
Port Forwarding Steps (via command line)
Create an ssh connection from REMOTE to PROXY to forward ssh traffic on port 22 on the REMOTE machine to port 8000 on the PROXY server.
# Run from the REMOTE machine
ssh -N -R 0.0.0.0:8000:localhost:22 PROXY_USER@PROXY_HOSTNAME
Create an ssh tunnel from LOCAL to PROXY and forward ssh traffic from LOCAL:1234 to PROXY:8000 (which then forwards to REMOTE:22).
# Run from LOCAL machine
ssh -L 1234:localhost:8000 PROXY_USER@PROXY_HOSTNAME
Create the forwarded ssh connection from LOCAL to REMOTE (via PROXY).
# Run from LOCAL machine in a new terminal window
ssh -p 1234 REMOTE_USER@localhost
# I have now ssh'd to the REMOTE box and can run commands
Paramiko Research
I have looked at a handful of questions related to port forwarding using Paramiko, but they don't seem to address this specific situation.
My Question
How can I use Paramiko to run steps 2 and 3 above? I essentially would like to run:
import paramiko
# Create the tunnel connection
tunnel_cli = paramiko.SSHClient()
tunnel_cli.connect(PROXY_HOSTNAME, PROXY_PORT, PROXY_USER)
# Create the forwarded connection and issue commands from LOCAL on the REMOTE box
fwd_cli = paramiko.SSHClient()
fwd_cli.connect('localhost', LOCAL_PORT, REMOTE_USER)
fwd_cli.exec_command('pwd')
A detailed explanation of what Paramiko is doing "under the hood" can be found at @bitprophet's blog here.
Assuming the configuration above, the code I have working looks something like this:
import os

from paramiko import SSHClient

# Set up the proxy (forwarding server) credentials
proxy_hostname = 'your.proxy.hostname'
proxy_username = 'proxy-username'
proxy_port = 22
hosts_file = os.path.expanduser('~/.ssh/known_hosts')

# Instantiate a client and connect to the proxy server
proxy_client = SSHClient()
proxy_client.load_host_keys(hosts_file)
proxy_client.connect(
    proxy_hostname,
    port=proxy_port,
    username=proxy_username,
    key_filename='/path/to/your/private/key'
)

# Get the client's transport and open a `direct-tcpip` channel passing
# the destination hostname:port and the local hostname:port
transport = proxy_client.get_transport()
dest_addr = ('0.0.0.0', 8000)
local_addr = ('127.0.0.1', 1234)
channel = transport.open_channel("direct-tcpip", dest_addr, local_addr)

# Create a NEW client and pass this channel to it as the `sock` (along with
# whatever credentials you need to auth into your REMOTE box)
remote_client = SSHClient()
remote_client.load_host_keys(hosts_file)
remote_client.connect('localhost', port=1234, username='remote_username', sock=channel)

# `remote_client` should now be able to issue commands to the REMOTE box
remote_client.exec_command('pwd')
Is the point solely to bounce SSH commands off PROXY, or do you need to forward other, non-SSH ports too?
If you just need to SSH into the REMOTE box, Paramiko supports both SSH-level gatewaying (tells the PROXY sshd to open a connection to REMOTE and forward SSH traffic on LOCAL's behalf) and ProxyCommand support (forwards all SSH traffic through a local command, which could be anything capable of talking to the remote box).
Sounds like you want the former to me, since PROXY clearly already has an sshd running. If you check out a copy of Fabric and search around for 'gateway' you will find pointers to how Fabric uses Paramiko's gateway support (I don't have time to dig up the specific spots myself right now.)
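For comparison, the plain OpenSSH client can express the same gateway with a config entry; a hedged sketch (the alias and the use of ProxyJump are my choices) matching the setup in the question:

```
# ~/.ssh/config on LOCAL (illustrative alias)
Host remote-via-proxy
    # localhost:8000 is resolved on the jump host, where REMOTE's sshd
    # is reverse-forwarded (step 1 above)
    HostName localhost
    Port 8000
    User REMOTE_USER
    ProxyJump PROXY_USER@PROXY_HOSTNAME
```

With that in place, `ssh remote-via-proxy` replaces steps 2 and 3.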
I currently have the problem that I have a server script running on one computer as localhost:12123. I can connect to it from the same computer, but another computer on the same network cannot connect to it (it says the server does not exist). The firewall is disabled.
Does it have to do with permissions?
The socket is created by a python file using BaseHTTPServer.
It probably has to do with binding to localhost instead of the actual LAN interface (e.g. 192.168.1.x) or all interfaces (sometimes referred to as 0.0.0.0).
This code would start an instance that binds to all interfaces (not only localhost):
def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('0.0.0.0', 12123)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()
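For reference, the same idea in Python 3 (where BaseHTTPServer became http.server) as a self-contained sketch that binds all interfaces and checks itself over loopback; port 0 just asks the OS for a free port in this demo:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b'ok')

    def log_message(self, *args):
        pass  # keep the demo quiet

# '0.0.0.0' binds all interfaces, so other machines on the LAN can connect
httpd = HTTPServer(('0.0.0.0', 0), Handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

body = urllib.request.urlopen('http://127.0.0.1:%d/' % port).read()
print(body)  # b'ok'
httpd.shutdown()
```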
server_address has to be ('0.0.0.0', 12123), see: 0.0.0.0
Bind to 0.0.0.0 or the outside IP address instead, obviously.