Is there any way to send files directly from one API to an FTP server, without first downloading them locally, in Python 3?
Currently we download from one API to local storage and then send the files to the FTP server; we want to remove that hop from the data flow by sending the files to the server directly.
You can use the byte data of the file (held in memory) and pass that to the other API.
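For example, a minimal sketch of that idea using requests and ftplib, buffering the whole file in an in-memory BytesIO; the URL, FTP host, and credentials below are placeholders:

import io
import ftplib
import requests

# Fetch the file from the source API into memory (placeholder URL)
resp = requests.get('https://api.example.com/export/report.csv')
resp.raise_for_status()

# Upload the in-memory bytes to the FTP server without touching the disk
with ftplib.FTP('ftp.example.com') as ftp:
    ftp.login('user', 'password')
    ftp.storbinary('STOR report.csv', io.BytesIO(resp.content))

Note that this holds the entire file in memory, so it is only suitable for files that comfortably fit in RAM.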
One option would be to add another API function (TransferFile, ...) that transfers data from the API server to the FTP site. Then you just call that API method from your code, without downloading the data to the local server.
The FTP protocol has a provision for initiating a data transfer between two remote hosts from a third-party client. This is called proxy mode. Unfortunately, most servers disable it for security reasons, because it used to be a very efficient vehicle for DoS attacks.
If you have control of both servers, if both use FTP, and if they are not publicly exposed, this can be very efficient.
In any other use case, the data will have to pass through the client. The best that can be done is to open both connections and forward data to the target host as soon as it has been received from the source, without storing it on disk.
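A hedged sketch of that pass-through approach, again with placeholder names: with requests in streaming mode, the response body is exposed as a file-like object that ftplib can read chunk by chunk, so only one block at a time is held in memory:

import ftplib
import requests

with requests.get('https://api.example.com/export/big.csv', stream=True) as resp:
    resp.raise_for_status()
    with ftplib.FTP('ftp.example.com') as ftp:
        ftp.login('user', 'password')
        # resp.raw is file-like; storbinary reads it in 8 KiB blocks by default
        ftp.storbinary('STOR big.csv', resp.raw)

Unlike the in-memory variant above, this never buffers the whole file, which matters for large transfers.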
I want to send data to the server from MT4, but I have two main problems:
1. I want to send data without opening the app, because otherwise I must use a VPS, which I won't do.
2. On the other hand, I want to send operations from my server to my account (again without opening the app).
My trading robot algorithms are written in Python and run on the server, and I need a connection between MT4 and my server without opening MT4.
I tried using the MT library for Python, but that way I must keep the app open and use a VPS, which I won't do. I want to send data to the server and, after processing is complete, send the operations back to my account.
Thank you for your help, guys.
I have an SFTP server. I can get the data by transferring/downloading the files. Is there a way to do this without downloading the files?
My code is as below:
import pysftp

# Connection to the SFTP server
with pysftp.Connection(hostname, username=username, password=password, port=port) as sftp:
    with sftp.cd('directory'):
        sftp.get('filename.txt')
This code downloads the file to my local machine.
Yes and no. You can use the data from the remote (SFTP) server without storing the files on a local disk.
But you cannot use the data locally without downloading them. That's impossible. You have to transfer the data to use them, at least into the memory of the local machine.
See A way to load big data on Python from SFTP server, not using my hard disk.
My answer there talks about Paramiko. But pysftp is just a thin wrapper around Paramiko. Its Connection.open maps directly to the underlying Paramiko SFTPClient.open. So you can keep using pysftp:
with sftp.open('filename.txt', bufsize=32768) as f:
    # use f as if you had opened a local file with open()
    data = f.read()
Though I'd recommend against that: see pysftp vs. Paramiko.
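Still, a fuller sketch of the pattern in pysftp terms, with placeholder host and credentials, processing the remote file line by line without ever writing it to disk:

import pysftp

with pysftp.Connection('sftp.example.com', username='user', password='secret') as sftp:
    with sftp.cd('directory'):
        with sftp.open('filename.txt', bufsize=32768) as f:
            for line in f:
                # Paramiko file objects yield bytes; decode before use
                print(line.decode('utf-8').rstrip('\n'))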
I have a project using pandas in Python to access data in Postgres through SQLAlchemy's create_engine function. When I pass the credentials and hostname:port, it throws an error and asks me to add the machine's IP to the pg_hba.conf file on the Postgres server. This is cumbersome, as I don't have a static IP for my machine, the project needs to be shared with other people, and it doesn't make any sense to keep adding new IPs or to make requests with ** IPs, as it holds sensitive data.
Additional information on the topic revealed that the actual issue is the local address the client is using for sending data when talking to the (database) server:
Your client needs to use the locally assigned VPN address as the source address.
This is achieved by adding a socket.bind(source_address) call before the call to socket.connect(target_address).
Or, more conveniently, just provide the source_address parameter to the socket.create_connection(address[, timeout[, source_address]]) call that sets up the connection to the server.
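Roughly like this, assuming the VPN-assigned local address is 10.8.0.5 and the database host and port below are placeholders:

import socket

# Bind the client side of the connection to the VPN-assigned local address;
# port 0 lets the OS pick any free ephemeral port.
conn = socket.create_connection(('db.example.com', 5432), timeout=10,
                                source_address=('10.8.0.5', 0))
conn.close()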
Imagine you have two Python processes, one server and one client, that interact with each other.
Both processes/programs run on the same host and communicate via TCP, e.g. by using the AMP protocol of the Twisted framework.
Can you think of an efficient and smart way for both Python programs to authenticate each other?
What I want to achieve is that the server only accepts connections from an authentic client, so that disallowed third-party processes cannot connect to the server.
I want to avoid things like public-key cryptography or SSL connections because of the huge overhead.
If you do not want to use SSL, there are a few options:
1. The client sends some authentication token (you may call it a password) to the server as part of the first data sent through the socket. This is the simplest way, and it is also cross-platform. A minimal sketch follows after this list.
2. The client sends its process ID (OS-specific). The server then makes some system calls to determine the path to the executable file of that client process. If it is a valid path, the client is approved. For example, a valid path could be '/bin/my_client' or "C:\Program Files\MyClient\my_client.exe", and if some other client (say, with path '/bin/some_another_app') tries to communicate with your server, it will be rejected. But I think this is also overhead, and the implementation is OS-specific.
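A minimal sketch of the first option, assuming both processes share a secret token distributed out of band (the token, address, and port are placeholders); hmac.compare_digest avoids leaking information through comparison timing:

import hmac
import socket

SECRET = b'shared-secret-token'  # placeholder; distribute out of band

def handle_client(conn):
    # Read the token the client sends first; a real protocol would frame
    # this message properly instead of relying on a single recv().
    token = conn.recv(64)
    if not hmac.compare_digest(token, SECRET):
        conn.close()  # reject unauthenticated peers
        return
    conn.sendall(b'OK')  # authenticated; continue with the real protocol

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(('127.0.0.1', 9000))
    server.listen(1)
    conn, _ = server.accept()
    handle_client(conn)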
I am trying to write a VPN server so that multiple clients can connect to each other on a virtual network.
So I need a threaded server to send and receive data to/from clients concurrently.
A tunnel interface may be created for each client, representing the client's virtual interface on the server.
I have two solutions for using the select() function to read/write from/to the tunnels on the server:
1. Using a single thread that calls select([tun0, tun1, tun2], [tun0, tun1, tun2], []) for all tunnels, and using buffers to hold pending traffic.
2. Calling select([tun0], [tun0], []) separately in each client's thread.
My question is: which way is better?
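For reference, a bare-bones sketch of the first approach (one thread, one select() across all tunnels); tun_fds stands for a hypothetical list of file descriptors of the per-client TUN devices, and the write buffering and real routing logic from the question are omitted:

import os
import select

def serve(tun_fds, mtu=1500):
    # Single-threaded event loop over all per-client tunnel descriptors
    while True:
        readable, _, _ = select.select(tun_fds, [], [])
        for fd in readable:
            packet = os.read(fd, mtu)
            # A real server would route by destination IP; this sketch
            # simply forwards the packet to every other client's tunnel.
            for other in tun_fds:
                if other != fd:
                    os.write(other, packet)

Note this omits the writable set and the per-tunnel buffers mentioned in option 1, so a slow tunnel could block the loop on os.write.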