We are using Python 3.6.8 on our server (an AWS EC2 instance). We connect to an FTP server and push files to it through an API call. When we push the files from our local machine it runs fine and the files are uploaded, but when the API is called on the server it returns a 502 Bad Gateway error after about 14.8 s (tested with Postman).
import ftplib

ftp = ftplib.FTP()
host = config.FTP_HOST
port = 21
ftp.connect(host, port)
try:
    ftp.login(config.FTP_USERNAME, config.FTP_PASSWORD)
    ftp.cwd("/DailyDump/target/")
    with open(path_image, 'rb') as file:
        ftp.storbinary("STOR sample_file_name" + str(yesterday_date) + ".csv", file)
except ftplib.all_errors as e:
    print('FTP error =', e)  # a bare "except: pass" silently swallows failures
finally:
    ftp.close()
The problem was caused by the API call hitting the gateway's maximum timeout, so I moved the code out of the API handler into a standalone script that can run for a longer time without erroring. There was no error in the FTP login itself.
I am trying to upload a file to an FTP server on my local Wi-Fi network to get a picture onto a digital picture frame. I succeeded in uploading through File Explorer, but when uploading with a Python script I get a 530 response.
Here is the code so far
import ftplib

ftp = ftplib.FTP()
ftp.connect("111.111.1.11", 1111)  # dummy host and port
file = open('C:/path/to/file/test1.png', 'rb')
ftp.storbinary('STOR test.png', file)  # storbinary expects the full "STOR <name>" command
file.close()
ftp.quit()
The server does not require me to log in with a username and password in File Explorer; is there some sort of default I need?
A 530 error code means authentication failed, so you are missing the login step. You can do something like this:
ftp = ftplib.FTP()
ftp.connect("111.111.1.11", 1111)  # dummy host and port
ftp.login(user, password)
Note that if you don't provide a user and password, it will log in with:
user: anonymous
password: anonymous@
as described in the ftplib documentation.
I would also recommend reading about SFTP (SSH File Transfer Protocol), because plain FTP passes the credentials in clear text in the login request.
SFTP is a file transfer protocol similar in purpose to FTP, but built on top of SSH.
Hope this helped you!
I have a very basic Python script that checks whether an array of ports is open on a load balancer DNS name, using a conventional socket. When I execute the code on my local machine, it runs fine and gives the expected output. When I deploy the same logic to Lambda, I get a timed-out error.
My local code:
import socket

DNS = ['loadbalancer-dns.elb.amazonaws.com']
PORT = [8099, 9087, 10041, 10004, 5001, 3001, 4001, 10010, 8085, 9050, 8088, 8081, 10041, 8086, 8072, 10025, 20026, 10006, 9098, 9099, 10005, 8070]

for iDNS in DNS:
    for iport in PORT:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        output = sock.connect_ex((iDNS, iport))
        if output == 0:
            print(f'Port {iport} is open on {iDNS}')
        else:
            print(f'Port {iport} is closed on {iDNS}')
        sock.close()
My Lambda function code:
import json
import boto3
import socket

PORT = [8099, 9087]
DNS = ['loadbalancer-dns.elb.amazonaws.com']

def lambda_handler(event, context):
    try:
        for iDNS in DNS:
            for iport in PORT:
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                output = sock.connect_ex((iDNS, iport))
                if output == 0:
                    print(f'Port {iport} is open on {iDNS}')
                else:
                    print(f'Port {iport} is closed on {iDNS}')
                sock.close()
    except:
        print('Task timed out')
My Python version in Lambda is Python 3.8 and my Timeout value is set to 1 minute 30 seconds.
Found the reason for the problem. My load balancer is an internal load balancer that serves applications running in private subnets, and a VPN is configured to my VPC. After attaching my Lambda function to the VPC, it worked.
Timeout errors can be attributed to several issues.
1. Insufficient processing power. While your code at first glance isn't very heavy, it is still of O(N²) complexity; keep this in mind if you expand it in the future. This is solved by allocating more memory to the function, which also allocates more CPU proportionally.
2. The function just needs more time. Can be related to the first point. Try more detailed logging to see what the operation was doing just before the timeout.
3. Networking problems (most likely). You can use a socket timeout to force an error on a resource that is unreachable. This can show you that some AWS "firewall" (security group, NACL, etc.) is blocking traffic between the Lambda and the endpoint.
I have a simple script that successfully downloads a 75MB file over FTP:
try:
    ftp = ftplib.FTP(host)
    ftp.login(username, password)
    ftp.cwd(source_dir)
except ftplib.all_errors as e:
    print('Ftp error = ', e)
    return False

# Check filename exists
if filename in ftp.nlst():
    local_filename = os.path.join(dest_dir, filename)
    lf = open(local_filename, "wb")
    ftp.retrbinary("RETR " + filename, lf.write)
    lf.close()
    print(filename, ' successfully downloaded')
else:
    print(filename, ' not found in the path ', source_dir)

ftp.quit()
This script works fine on both my home and work laptops when run from Spyder IDE or a Windows scheduled task.
I have deployed the exact same script to a Windows Virtual Machine on Azure.
Files less than 10MB seem to download ok.
Files larger than 30MB return an exception:
421 Data timeout. Reconnect. Sorry.
I get around 700 Mbps on Azure and only around 8Mbps on my home network.
It looks like a timeout. I can see the file is partially downloaded.
I tried setting ftp.set_pasv(False), but that returns 500 Illegal Port, which is to be expected; I understand passive mode is the preferred approach anyway.
What else can I do to troubleshoot and resolve this issue?
Just some suggestions for you.
According to the Wikipedia page for File Transfer Protocol, FTP may run in active or passive mode. In active mode, the client must open a listening port for incoming data connections from the server. Because that client-side port is randomly assigned, you cannot add it to your NSG inbound rules in advance, so on an Azure VM you should use passive mode, either by calling FTP.set_pasv(True) or simply by never calling FTP.set_pasv(False) (passive is ftplib's default).
For the 421 Data timeout. Reconnect. Sorry. issue, check the timeout settings on your FTP server, such as the data_connection_timeout property in vsftpd's vsftpd.conf file, and set a long enough timeout value.
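For vsftpd specifically, the relevant settings look roughly like this (the values are illustrative, not recommendations):

```ini
# vsftpd.conf -- timeouts in seconds
data_connection_timeout=600   # how long a data transfer may stall
idle_session_timeout=600      # how long a control session may sit idle
```

Other FTP daemons have equivalent knobs under different names; check your server's documentation.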
Try setting a timeout value longer than the global default via the ftplib.FTP(host='', user='', passwd='', acct='', timeout=None, source_address=None) constructor.
Try using FTP.set_debuglevel(level) to get more detailed debug output from your script and find the likely cause.
Hope it helps.
I built a simple DNS server, and I'm just trying to print the data (the whole packet), but my server gets stuck at the recvfrom part.
I ran a file as administrator that changes my DNS server to 127.0.0.1, but it doesn't work.
This is my code:
I tried typing some URLs into my browser, but my server receives nothing and stays stuck at the recv.
import socket

myserver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # typo fixes: socket / SOCK_DGRAM
myserver.bind(('0.0.0.0', 53))
data, addr = myserver.recvfrom(1024)
print(data)  # print() call, for Python 3
Below is the code I am running within a service. For the most part the script runs fine for days or weeks until it hiccups and crashes. I am not so worried about the crashing part, as I can identify the cause from the error logs and patch appropriately. The issue I am facing is that sometimes, when the service restarts and tries to connect to the server again, it gets a (10061, 'Connection refused') error, so the service is unable to start up again. The bizarre part is that no Python processes are running while connections are being refused, i.e. no process with image name "pythonw.exe" or "pythonservice.exe". It should be noted that I am also unable to connect to the server from any other machine until I reset the computer that runs the client script. The client machine runs Python 2.7 on Windows Server 2003. It should also be noted that the server is implemented on a piece of hardware whose code I do not have access to.
try:
    EthernetConfig = ConfigParser()
    EthernetConfig.read('Ethernet.conf')
    HOST = EthernetConfig.get("TCP_SERVER", "HOST").strip()
    PORT = EthernetConfig.getint("TCP_SERVER", "PORT")
    lp = LineParser()
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    reader = s.makefile("rb")
    while self.run == True:
        line = reader.readline()
        if line:
            line = line.strip()
            lp.parse(line)
except:
    servicemanager.LogErrorMsg(traceback.format_exc())  # on error, log it to the event log
    s.shutdown(2)
    s.close()
    os._exit(-1)
Connection refused is an error meaning that the program on the other side of the connection is not accepting your connection attempt. Most probably it hasn't noticed you crashing and hasn't closed its side of the connection.
What you can do is simply sleep a little while (30-60 seconds) and try again, in a loop, and hope the other end notices that the connection is broken so it can accept new connections again.
Turns out the network admin had closed the port I was trying to connect to. It is open for one IP, which belongs to the server; the problem is that the server has two network cards with two separate IPs. The issue is now resolved.