How do I access pyftpdlib FTPS server on Amazon EC2? - python

I am trying to create a simple FTPS server on my Ubuntu Amazon EC2 instance using the Python library pyftpdlib.
Here is the code straight from the documentation:
#!/usr/bin/env python
"""
An RFC-4217 asynchronous FTPS server supporting both SSL and TLS.
Requires PyOpenSSL module (http://pypi.python.org/pypi/pyOpenSSL).
"""
from pyftpdlib.servers import FTPServer
from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.contrib.handlers import TLS_FTPHandler
import os

def main():
    authorizer = DummyAuthorizer()
    authorizer.add_user('ubuntu', '*****', os.getcwd(), perm='elradfmw')
    authorizer.add_anonymous('.')
    handler = TLS_FTPHandler
    handler.certfile = 'keycert.pem'
    handler.authorizer = authorizer
    handler.masquerade_address = '52.23.244.142'
    # requires SSL for both control and data channel
    handler.tls_control_required = True
    handler.tls_data_required = True
    handler.passive_ports = range(60000, 60099)
    server = FTPServer(('', 21), handler)
    server.serve_forever()

if __name__ == '__main__':
    main()
When I run the script on my Amazon EC2 instance and try to connect remotely using FileZilla, I get:
Status: Connecting to 52.23.244.142:21...
Status: Connection established, waiting for welcome message...
Response: 220 pyftpdlib 1.4.0 ready.
Command: AUTH TLS
Response: 234 AUTH TLS successful.
Status: Initializing TLS...
Status: Verifying certificate...
Command: USER ubuntu
Status: TLS/SSL connection established.
Response: 331 Username ok, send password.
Command: PASS *****
Response: 230 Login successful.
Command: OPTS UTF8 ON
Response: 501 Invalid argument.
Command: PBSZ 0
Response: 200 PBSZ=0 successful.
Command: PROT P
Response: 200 Protection set to Private
Command: OPTS MLST type;perm;size;modify;unix.mode;unix.uid;unix.gid;
Response: 200 MLST OPTS type;perm;size;modify;unix.mode;unix.uid;unix.gid;
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/" is the current directory.
Command: TYPE I
Response: 200 Type set to: Binary.
Command: PASV
Response: 227 Entering passive mode (52,23,244,142,174,172).
Command: MLSD
Response: 150 File status okay. About to open data connection.
Error: Connection timed out
Error: Failed to retrieve directory listing
I think I am missing something. Can I get some help?

Your server must present its external IP address in the response to the PASV command. Instead, it presents an internal IP address within the EC2 private network, to which FileZilla obviously cannot connect.
While FileZilla can work around that:
Server sent passive reply with unroutable address. Using server address instead.
other FTP clients (like the Windows command-line ftp.exe) cannot.
Use handler.masquerade_address to configure the external IP address:
handler.masquerade_address = '52.23.244.142'
FileZilla cannot connect to the passive-mode port, here 44716 ((174 << 8) + 172, from the PASV response above). You have probably not opened the FTP passive-mode port range in the EC2 firewall (security group).
See Setting up FTP on Amazon Cloud Server (particularly section "Step #2: Open up the FTP ports on your EC2 instance" in the best answer).
To avoid opening the whole unprivileged port range, limit the FTP server to a smaller port range using handler.passive_ports:
handler.passive_ports = range(60000, 60099)
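The range can be opened in the EC2 console, or programmatically; here is a sketch using boto3 (an assumption on my side — the security-group ID is a placeholder, and note that range(60000, 60099) covers ports 60000-60098 inclusive):
import boto3

ec2 = boto3.client('ec2')

# open the FTP control port and the passive-mode data ports;
# 'sg-0123456789abcdef0' is a placeholder for your instance's security group
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 21, 'ToPort': 21,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
        {'IpProtocol': 'tcp', 'FromPort': 60000, 'ToPort': 60098,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
    ])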
For general information, see my article about the network setup with respect to FTP passive (and active) connection modes.

Related

MQRC_SSL_INITIALIZATION_ERROR with PyMQI (however it connects successfully with the C application amqssslc)

I am using the same local Windows 7 computer with MQ Client 9.0.4.0. When trying to connect to the server with amqssslc, I can successfully connect to the QMGR. However, when I try to connect with PyMQI, I get the following error:
MQI Error. Comp: 2, Reason 2393: FAILED: MQRC_SSL_INITIALIZATION_ERROR
The code that I am using is the following:
import pymqi
import logging
import sys, codecs, locale
logging.basicConfig(level=logging.INFO)
queue_manager = 'QMGR'
channel = 'channel'
host = 'server.com'
port = '1414'
conn_info = '%s(%s)' % (host, port)
ssl_cipher_spec = str.encode('TLS_RSA_WITH_AES_256_CBC_SHA256')
key_repo_location = str.encode('T:/Desktop/certificates/key')
message = 'TEST Message'
channel = str.encode(channel)
host = str.encode(host)
conn_info = str.encode(conn_info)
cd = pymqi.CD(Version=pymqi.CMQXC.MQCD_VERSION_11)
cd.ChannelName = channel
cd.ConnectionName = conn_info
cd.ChannelType = pymqi.CMQC.MQCHT_CLNTCONN
cd.TransportType = pymqi.CMQC.MQXPT_TCP
cd.SSLCipherSpec = ssl_cipher_spec
cd.CertificateLabel = 'edgar'.encode()
sco = pymqi.SCO()
sco.KeyRepository = key_repo_location
qmgr = pymqi.QueueManager(None)
qmgr.connect_with_options(queue_manager, cd, sco)
However, when using amqssslc, which comes with the MQ Client installation on my machine, it works without any errors and connects successfully.
The error from the AMQERR01.LOG file says the following:
AMQ9716E: Remote SSL certificate revocation status check failed for channel
'channel_name'.
EXPLANATION:
IBM MQ failed to determine the revocation status of the remote SSL certificate
for one of the following reasons:
(a) The channel was unable to contact any of the CRL servers or OCSP responders
for the certificate.
(b) None of the OCSP responders contacted knows the revocation status of the
certificate.
(c) An OCSP response was received, but the digital signature of the response
could not be verified.
I can't change my mqclient.ini config file, since it is locked down by my lack of admin rights (company policy). What I find weird is that amqssslc works even though both are using the same mqclient.ini file. I have also tried pointing MQCLNTCF at another folder containing a different config file, without success.
Any help would be truly appreciated!
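For reference, a minimal sketch of that kind of override (the path is a placeholder; OCSPAuthentication and OCSPCheckExtensions are the SSL-stanza attributes that relax the revocation check, assuming your MQ client honours the alternate file):
import os

# point the MQ client at an alternate configuration file before connecting;
# 'T:/Desktop/mqclient.ini' is a placeholder whose SSL stanza might contain:
#
#   SSL:
#      OCSPAuthentication=OPTIONAL
#      OCSPCheckExtensions=NO
#
os.environ['MQCLNTCF'] = 'T:/Desktop/mqclient.ini'

import pymqi  # import after setting the variable so the override is picked up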

grpc client dns resolution failed when trying to access grpc server on same network

I'm trying to call a gRPC server, running in a .NET Core project, from a Python client.
When running against localhost:5001 it works fine, but running against the actual IP of the machine from within the same network, like 192.168.1.230:5001, it doesn't work and I get a "DNS resolution failed" error.
I've downloaded the SSL cert and am at the moment reading it as a file from the client. It works when running against localhost, so I don't think the certificate is the problem.
Is there a better way to do this kind of testing, with clients running on separate devices on the same network as the server? Hosting the gRPC server externally during development doesn't really seem like the best solution.
Python code:
import grpc
import datamessage_pb2 as datamessage
import datamessage_pb2_grpc as datamessageService
def main():
    print("Calling grpc server")
    with open("localhost.cer", "rb") as file:
        cert = file.read()
    credentials = grpc.ssl_channel_credentials(cert)
    channel = grpc.secure_channel("https://192.168.1.230:5001", credentials)
    # channel = grpc.secure_channel("localhost:5001", credentials)
    stub = datamessageService.StationDataHandlerStub(channel)
    request = datamessage.StationDataModel(
        temperature=22.3, humidity=13.3, soilMoisture=35.0)
    result = stub.RegisterNewStationData(request)
    print(result)

main()
Server settings in Program.cs:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseUrls("https://*:5001");
            webBuilder.UseStartup<Startup>();
        });
Settings in firewall: [screenshot omitted]
Traceback:
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed"
debug_error_string = "{"created":"@1576101634.549000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3934,"referenced_errors":[{"created":"@1576101634.549000000","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":262,"referenced_errors":[{"created":"@1576101634.549000000","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/native/dns_resolver.cc","file_line":202,"grpc_status":14,"referenced_errors":[{"created":"@1576101634.549000000","description":"OS Error","file":"src/core/lib/iomgr/resolve_address_windows.cc","file_line":96,"os_error":"No such host is known.\r\n","syscall":"getaddrinfo","wsa_error":11001}]}]}]}"
In the Python gRPC client, the channel target must be given without a protocol prefix (no https:). So I called the gRPC service in the .NET Core framework with the following and it worked. Note that the .NET gRPC server was listening on https://localhost:5001.
with open('localhost.crt', 'rb') as f:
    credentials = grpc.ssl_channel_credentials(f.read())

with grpc.secure_channel('localhost:5001', credentials) as channel:
    stub = pb2_grpc.GreeterStub(channel)
    request = pb2.HelloRequest(name="GreeterPythonClient")
    response = stub.SayHello(request)
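If the goal is to reach the server from another device on the LAN while keeping a certificate issued for localhost, a sketch (assuming the standard grpc.ssl_target_name_override channel argument; for anything beyond testing, issue a certificate for the real hostname or IP instead):
# dial the machine's LAN IP, but validate the certificate against the
# name it was actually issued for ('localhost' here)
channel = grpc.secure_channel(
    '192.168.1.230:5001',
    credentials,
    options=(('grpc.ssl_target_name_override', 'localhost'),))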

How to execute remote commands via SSH through authenticated HTTP Proxy?

I am posting both the question and the answer I found, in case it helps someone. The following were my minimum requirements:
1. Client machine is Windows 10 and remote server is Linux
2. Connect to remote server via SSH through HTTP Proxy
3. HTTP Proxy uses Basic Authentication
4. Run commands on remote server and display output
The purpose of the script was to log in to the remote server, run a bash script (check.sh) present on the server, and display the result. The bash script simply runs a list of commands displaying the overall health of the server.
There have been numerous discussions here on how to use an HTTP proxy, or how to run remote commands using Paramiko. However, I could not find the combination of both.
from urllib.parse import urlparse
from http.client import HTTPConnection
import paramiko
from base64 import b64encode

# host details
host = "remote-server-IP"
port = 22

# proxy connection & socket definition
proxy_url = "http://uname001:passw0rd123@HTTP-proxy-server-IP:proxy-port"
url = urlparse(proxy_url)
http_con = HTTPConnection(url.hostname, url.port)
auth = b64encode(bytes(url.username + ':' + url.password, "utf-8")).decode("ascii")
headers = {'Proxy-Authorization': 'Basic %s' % auth}
http_con.set_tunnel(host, port, headers)
http_con.connect()
sock = http_con.sock

# ssh connection
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    ssh.connect(hostname=host, port=port, username='remote-server-uname',
                password='remote-server-pwd', sock=sock)
except paramiko.SSHException:
    print("Connection Failed")
    quit()

stdin, stdout, stderr = ssh.exec_command("./check.sh")
for line in stdout.readlines():
    print(line.strip())
ssh.close()
I would welcome any suggestions to the code, as I am a network analyst rather than a coder, but keen to learn and improve.
I do not think that your proxy code is correct.
For a working proxy code, see How to ssh over http proxy in Python?, particularly the answer by @tintin.
As it seems that you need to authenticate to the proxy, add a Proxy-Authorization header to the CONNECT request, like:
Proxy-Authorization: Basic <credentials>
where <credentials> is the base64-encoded string username:password.
cmd_connect = "CONNECT {}:{} HTTP/1.1\r\nProxy-Authorization: Basic <credentials>\r\n\r\n".format(*target)
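A minimal sketch of the whole flow under that approach (proxy address, target address, and credentials are placeholders):
import socket
from base64 import b64encode

import paramiko

proxy = ("HTTP-proxy-server-IP", 8080)
target = ("remote-server-IP", 22)
credentials = b64encode(b"uname001:passw0rd123").decode("ascii")

# open a TCP connection to the proxy and ask it to tunnel to the SSH server
sock = socket.create_connection(proxy)
cmd_connect = ("CONNECT {host}:{port} HTTP/1.1\r\n"
               "Proxy-Authorization: Basic {auth}\r\n"
               "\r\n").format(host=target[0], port=target[1], auth=credentials)
sock.sendall(cmd_connect.encode("ascii"))

# read the proxy's reply up to the blank line that ends the headers
response = b""
while b"\r\n\r\n" not in response:
    chunk = sock.recv(4096)
    if not chunk:
        raise IOError("proxy closed the connection")
    response += chunk
if b" 200 " not in response.split(b"\r\n", 1)[0]:
    raise IOError("proxy refused CONNECT: %r" % response.split(b"\r\n", 1)[0])

# hand the tunnelled socket to Paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=target[0], port=target[1],
            username='remote-server-uname', password='remote-server-pwd',
            sock=sock)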

python websocket failure when run in openshift

I have an Autobahn/Twisted WebSocket server running in Python, which works correctly in a dev VM, but I have been unable to get it working when the server runs in OpenShift.
Here is the shortened code, which works for me in a VM.
from autobahn.twisted.websocket import WebSocketServerProtocol, WebSocketServerFactory, listenWS
from autobahn.twisted.resource import WebSocketResource
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File

class MyServerProtocol(WebSocketServerProtocol):
    def onConnect(self, request):
        pass  # stuff...

    def onOpen(self):
        pass  # stuff...

    def onMessage(self, payload):
        pass  # stuff...

factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)
root = File(".")
root.putChild(b"ws", resource)
site = Site(root)
reactor.listenTCP(8080, site)
reactor.run()
The connection part of the client is as follows:
var wsuri;
var hostname = window.document.location.hostname;
wsuri = "ws://" + hostname + ":8080/ws";

if ("WebSocket" in window) {
    sock = new WebSocket(wsuri);
} else if ("MozWebSocket" in window) {
    sock = new MozWebSocket(wsuri);
} else {
    log("Browser does not support WebSocket!");
    window.location = "http://autobahn.ws/unsupportedbrowser";
}
The OpenShift configuration is as follows:
1 pod running with app.py listening on port 8080
TLS not enabled
I have a non-TLS route 8080 > 8080.
Firefox gives the following message in the console:
Firefox can’t establish a connection to the server at ws://openshiftprovidedurl.net:8080/ws.
When I use wscat to connect to the websocket:
wscat -c ws://openshiftprovidedurl.net/ws
I get the following error:
error: Error: unexpected server response (400)
and the application log in openshift shows the following:
2018-04-03 01:14:24+0000 [-] failing WebSocket opening handshake ('missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)')
2018-04-03 01:14:24+0000 [-] dropping connection to peer tcp4:173.21.2.1:38940 with abort=False: missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False)
2018-04-03 01:14:24+0000 [-] WebSocket connection closed: connection was closed uncleanly (missing port in HTTP Host header 'openshiftprovidedurl.net' and server runs on non-standard port 8080 (wss = False))
Any assistance would be appreciated!
Graham Dumpleton hit the nail on the head. I modified the code from
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080")
to
factory = WebSocketServerFactory(u"ws://0.0.0.0:8080", externalPort=80)
and it corrected the issue. I had to modify my index to point to the correct websocket but I am now able to connect.
Thanks!
Based on the source code of autobahn-python, you can get that message only in 2 cases.
Here is the implementation:
if not ((self.factory.isSecure and self.factory.externalPort == 443) or
        (not self.factory.isSecure and self.factory.externalPort == 80)):
    return self.failHandshake("missing port in HTTP Host header '%s' and server runs on non-standard port %d (wss = %s)" % (str(self.http_request_host), self.factory.externalPort, self.factory.isSecure))
Because you are presumably using a Deployment + Service (and maybe an Ingress on top of them) for your server, you can instead bind your server to port 80 rather than 8080 and set that port in the Service, and in the Ingress if you are using one, as sketched below.
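A sketch of that variant, reusing the names from the question's code (note that binding ports below 1024 may need extra privileges in OpenShift, since pods run as non-root by default):
# listen on port 80 inside the pod so the default-port check passes
# without externalPort; the route/Service must then target port 80
factory = WebSocketServerFactory(u"ws://0.0.0.0:80")
factory.protocol = MyServerProtocol
resource = WebSocketResource(factory)
root = File(".")
root.putChild(b"ws", resource)
reactor.listenTCP(80, Site(root))
reactor.run()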

Connecting to an FTP server on IPv6 in python

This is how I programmatically connect to an FTP server:
Python code
ftp = ftplib.FTP(settings.FTP_IP)
ftp.login(settings.FTP_LOGIN, settings.FTP_PASS)
# ...
# here I upload files to the server
# ...
ftp.quit()
That works over IPv4. But how do I connect to the server via IPv6?
I looked at some libraries and tried them in the shell to connect, but alas, it did not work.
Please tell me if anyone has dealt with this.
After looking at the code of ftplib.py, it seems to me that the code is absolutely ready for IPv6.
The library knows about EPSV and EPRT and uses them where appropriate.
E.g.,
def makepasv(self):
    if self.af == socket.AF_INET:
        host, port = parse227(self.sendcmd('PASV'))
    else:
        host, port = parse229(self.sendcmd('EPSV'), self.sock.getpeername())
    return host, port
shows that it sends a PASV or an EPSV depending on which IP version we use.
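So connecting over IPv6 should not need any special code; a minimal sketch (2001:db8::1 is a documentation-only placeholder address):
import ftplib

# ftplib picks the address family from getaddrinfo(), so an IPv6 literal
# (or a hostname that resolves to an AAAA record) is all it takes
ftp = ftplib.FTP()
ftp.connect('2001:db8::1', 21)
ftp.login('user', 'password')
ftp.retrlines('LIST')  # data connection negotiated via EPSV over IPv6
ftp.quit()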
