I'm trying to download some torrent pieces manually. I have the tracker URL. How can I connect to the tracker via Python and obtain peer info?
I already created a simple client using libtorrent, but my ISP blocks torrent downloads. I want to understand what's really happening under the hood.
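For an HTTP tracker, "under the hood" is just a GET request against the announce URL, whose bencoded response contains a peer list. Here is a minimal stdlib-only sketch; the tracker URL, `info_hash`, and `peer_id` are placeholders you would take from your .torrent file, and the tiny `bdecode` covers only what a tracker reply needs:

```python
import struct
import urllib.parse
import urllib.request

def bdecode(data, i=0):
    # Minimal bencode decoder: ints (i...e), byte strings (len:bytes),
    # lists (l...e) and dicts (d...e). Returns (value, next_index).
    c = data[i:i + 1]
    if c == b"i":
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c in (b"l", b"d"):
        items, i = [], i + 1
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        if c == b"l":
            return items, i + 1
        return dict(zip(items[::2], items[1::2])), i + 1
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

def parse_compact_peers(blob):
    # Compact peer format: 6 bytes per peer -- 4-byte IPv4 + 2-byte big-endian port.
    peers = []
    for off in range(0, len(blob), 6):
        ip = ".".join(str(b) for b in blob[off:off + 4])
        (port,) = struct.unpack(">H", blob[off + 4:off + 6])
        peers.append((ip, port))
    return peers

def announce(tracker_url, info_hash, peer_id, port=6881, left=0):
    # One HTTP announce; info_hash and peer_id are raw 20-byte strings,
    # so they must be percent-encoded byte-by-byte.
    qs = "&".join([
        "info_hash=" + urllib.parse.quote_from_bytes(info_hash),
        "peer_id=" + urllib.parse.quote_from_bytes(peer_id),
        f"port={port}", "uploaded=0", "downloaded=0",
        f"left={left}", "compact=1",
    ])
    with urllib.request.urlopen(tracker_url + "?" + qs) as resp:
        reply, _ = bdecode(resp.read())
    return parse_compact_peers(reply[b"peers"])
```

Note that many trackers are UDP-only (a different, binary protocol), and that the peers you get back still speak the BitTorrent peer wire protocol, which this sketch does not cover.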
Hello, I am trying to connect to an S3 server, and while trying to get the connection state I can't find any predefined API that does that. I'm currently using boto3 with Python for this. Does anyone have any idea how to constantly get the connectivity state, to show whether it's connected or disconnected from S3? It's for display.
There is no "state". All requests are REST API calls. Once an API call has finished, there is no ongoing connection between the systems.
Think of it like sending an email -- once an email has been sent to a server, there is no 'state' between the sending machine and the email server. They have no need for a 'connection state'.
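Since boto3 has no connection-state API, the closest substitute for a status display is polling: issue a cheap request on a timer and report whether it succeeded. A sketch under those assumptions; `client` would be `boto3.client("s3")` and the bucket name is a placeholder:

```python
import logging

def s3_reachable(client, bucket):
    """Return True if a HEAD request on the bucket succeeds right now.

    Because S3 access is stateless, "connected" can only mean "the last
    probe worked" -- call this on a timer and update the display.
    """
    try:
        client.head_bucket(Bucket=bucket)  # cheap metadata-only request
        return True
    except Exception as exc:  # botocore raises ClientError / EndpointConnectionError
        logging.debug("S3 probe failed: %s", exc)
        return False

# Typical use:
# s3 = boto3.client("s3")
# status_label.setText("connected" if s3_reachable(s3, "my-bucket") else "disconnected")
```

Poll at a modest interval (a few seconds) so the probes don't add noticeable request volume or cost.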
This question already has an answer here:
How to send CSV file directly to an FTP server
(1 answer)
Closed 3 years ago.
Is there any way to send files directly from one API to an FTP server, without downloading them locally, in Python 3?
Currently we download from the API to local disk and then send the file to the FTP server; we want to remove that hop from the data flow by sending files directly to the server.
You can keep the file's byte data in memory (e.g. in a BytesIO buffer) and pass that to the other API.
One option would be to have another API function (TransferFile, ...) which transfers data from the API server to the FTP site. Then you just call that API method from your code, without downloading data to the local server.
The FTP protocol has a provision for initiating a data transfer between two remote hosts from a third-party client. This is called proxy mode. Unfortunately, most servers disable it for security reasons, because it used to be a very efficient vector for DoS attacks.
If you control both servers, both use FTP, and they are not publicly exposed, this can be very efficient.
In any other use case, the data will have to pass through the client. The best that can be done is to open both connections and relay data to the target host as soon as it has been received from the source, without storing it on disk.
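That relay can be done with the stdlib alone, because `ftplib.FTP.storbinary` accepts any file-like object and an HTTP response is one. A sketch; hostnames, credentials, and the source URL are placeholders:

```python
import ftplib
import io
import urllib.request

def relay_fp_to_ftp(fp, ftp, remote_name, chunk_size=64 * 1024):
    # storbinary pulls chunk_size bytes at a time from fp, so nothing
    # is ever written to the local disk.
    ftp.storbinary(f"STOR {remote_name}", fp, blocksize=chunk_size)

def relay_url_to_ftp(url, ftp, remote_name):
    # The HTTP response object is file-like, so it can be streamed
    # straight into the FTP upload.
    with urllib.request.urlopen(url) as response:
        relay_fp_to_ftp(response, ftp, remote_name)

# Typical use (placeholders):
# with ftplib.FTP("ftp.example.com", "user", "password") as ftp:
#     relay_url_to_ftp("https://api.example.com/report.csv", ftp, "report.csv")
```

Memory use stays bounded by the chunk size, so this also works for files too large to buffer whole in a BytesIO.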
I want to send data read from GPIO to a server, not only over WiFi but also over the mobile network, from anywhere. I don't have, and don't want to deal with setting up, a public address for the RPi2, which is why I need the RPi2 to be the client, sending data through websockets to a server on a public address. It could be a Node.js, Python, or PHP client/server setup, as long as there is no need for a browser on the client-side RPi2. Any suggestions? Thank you.
Additionally, can you please explain how a server-side Node.js client would work?
An (incomplete) list of libraries with websocket support
For Python, you could look at:
Tornado Web
Twisted Matrix
Autobahn
Crossbar.io
For node.js you could look at:
socket.io
web-socket-js
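To make the client side concrete, here is a sketch using the third-party `websockets` package (`pip install websockets`), which is another option alongside the libraries above. The server URL and pin number are placeholders, and the import is deferred so the payload helper works without the package installed:

```python
import asyncio
import json
import time

def make_payload(pin, value):
    # Encode one GPIO reading as a JSON text frame.
    return json.dumps({"pin": pin, "value": value, "ts": time.time()})

async def stream_readings(url, read_gpio, pin=17, interval=1.0):
    # 'websockets' is a third-party package (pip install websockets);
    # imported lazily so make_payload() is usable without it.
    import websockets
    async with websockets.connect(url) as ws:
        while True:
            await ws.send(make_payload(pin, read_gpio(pin)))
            await asyncio.sleep(interval)

# On the Pi, read_gpio would wrap RPi.GPIO.input(); a stub for testing:
# asyncio.run(stream_readings("ws://my-public-server:8765", lambda p: 0))
```

Because the Pi initiates the outbound connection, this works from behind NAT on WiFi or a mobile network with no public address on the Pi; only the server needs one.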
Also, you should consider editing your question to ask just one thing at a time. Your second question you can research yourself; a little googling will turn up an article or two explaining what you'd like to know.
I would like to build a streaming server with Python/Twisted that receives a WebRTC video stream and then applies some OpenCV algorithms to it.
However, I cannot find a Python module for WebRTC. How can I send and receive a WebRTC video stream with Python/Twisted?
Thanks!
I have started putting together the basic blocks needed to create a Python WebRTC endpoint.
One is an asyncio-based Interactive Connectivity Establishment module:
https://github.com/jlaine/aioice
Another one is a Python binding for libsrtp:
https://github.com/jlaine/pylibsrtp
We also need SRTP keying support in the OpenSSL bindings:
https://github.com/pyca/cryptography/pull/4099
On top of this, we can then build an asyncio-based WebRTC implementation:
https://github.com/jlaine/aiortc
I have been able to get both Chrome and Firefox to establish an audio and video stream to a Python-based server.
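With aiortc, answering a browser's offer looks roughly like the sketch below. The JSON helpers are just one possible wire format for your own signaling channel (not part of aiortc); the aiortc import is deferred so those helpers stand alone:

```python
import json

def description_to_json(sdp, type_):
    # Serialize a session description for exchange over your signaling channel.
    return json.dumps({"sdp": sdp, "type": type_})

def description_from_json(raw):
    data = json.loads(raw)
    return data["sdp"], data["type"]

async def answer_offer(raw_offer):
    # Requires the third-party 'aiortc' package (pip install aiortc);
    # imported lazily so the JSON helpers above work without it.
    from aiortc import RTCPeerConnection, RTCSessionDescription
    sdp, type_ = description_from_json(raw_offer)
    pc = RTCPeerConnection()
    await pc.setRemoteDescription(RTCSessionDescription(sdp=sdp, type=type_))
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    # Send this back to the browser over the same signaling channel.
    return description_to_json(pc.localDescription.sdp, pc.localDescription.type)
```

Incoming media then arrives via the peer connection's track events, where each video frame can be converted to an ndarray and handed to OpenCV.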
What you can do is take screenshots continuously, push them to a websocket, and allow your Twisted server to take a gander at each one as it comes in.
I have modified some common recorders, and my version takes JPEG images and pushes them over a websocket. Feel free to use and modify it to fit your needs. Source code here. The example I use pushes down to a libwebsocket server built in C, but the same JavaScript could be used to send to any websocket server.
I've had a similar issue and ended up creating a server that launches a headless Chrome instance, from which I can access the WebRTC streams, record chunks with a MediaRecorder, and finally forward those chunks on via a WebSocket.
I'd love a python based solution so I wouldn't need the intermediary server launching headless chrome instances but haven't been able to find one.
I've been using Node.js and Puppeteer, but you could launch the browser instances from your Python server and then send the decoded data back via plain old sockets or whatever else tickles your fancy.
I have built a messaging/chat application for my local network (all Windows) using pyzmq, with PyQt for the UI; it is based on the Majordomo pattern. It's set up this way:
each machine on the network has a client/worker pair
they connect to a 'server' broker via pyzmq and register sessions
sessions are broadcasted by 'server' broker to clients
when a 'sender' client sends a message to a specific session, the broker routes the message to the corresponding worker destination; a reply is generated by the worker and routed by the broker back to the 'sender' client (ending the loop and confirming delivery)
Everything is working well; text messages are composed in the 'client' PyQt UI and received by the 'worker' PyQt UI.
Now I'm looking to build upon this skeleton to add video chat to my application... I have been looking into webRTC and would like to find a way to implement it.
This is how webRTC works, from what I gather (I could be severely wrong here, please correct me):
Machine A's Chrome browser opens a local video/audio stream from the webcam/mic via the JavaScript function webkitGetUserMedia, then creates a (Machine A) URL for the stream via the JavaScript function webkitURL
Sends the (Machine A) URL to Machine B's Chrome browser via a signaling server
Machine B's Chrome browser accepts and loads the (Machine A) URL, sets up its own local video/audio stream from webcam/mic via the previously mentioned JavaScript functions, and replies with a (Machine B) URL back to Machine A via the signaling server
Machine A's Chrome browser is displaying (Machine B) video/audio | Machine B's Chrome browser is displaying (Machine A) video/audio
Is that the process, or is this a totally wrong assumption of how peers connect to each other?
If correct, I would like to adapt my current pyzmq application to act as a signaling server for creating connections between machines. Since the IP addresses of my machines are known to me and I can configure my firewall to open the needed ports, I'm trying to eliminate any extra STUN/TURN servers for this setup; I am not planning to go outside my LAN or access remote machines. And I would like to handle everything (as much as possible) with Python and the included batteries (avoiding Node.js).
So the main question is: how should I go about integrating webRTC into my setup? Does webRTC need specific prerequisite libraries or APIs to be built and running on the signaling server or the peer machines? Any code examples/advice/links would be appreciated.
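One point worth knowing: WebRTC does not mandate a transport for signaling, so your existing pyzmq broker can fill that role; all the signaling server does is forward opaque offer/answer/candidate messages between named peers. A stdlib-only sketch of that shape (JSON lines over TCP; message fields and port are my own illustrative choices, not a WebRTC requirement):

```python
import asyncio
import json

class SignalingRelay:
    """Minimal JSON-lines signaling relay.

    Peers register under a name; 'offer', 'answer', and 'candidate'
    messages carry a 'to' field and are forwarded verbatim to that peer.
    The relay never interprets the SDP -- the peers negotiate directly.
    """

    def __init__(self):
        self.peers = {}  # name -> StreamWriter

    async def handle(self, reader, writer):
        name = None
        try:
            while line := await reader.readline():
                msg = json.loads(line)
                if msg["type"] == "register":
                    name = msg["name"]
                    self.peers[name] = writer
                else:  # offer / answer / candidate: relay to the named target
                    target = self.peers.get(msg["to"])
                    if target is not None:
                        target.write(json.dumps(msg).encode() + b"\n")
                        await target.drain()
        finally:
            if name:
                self.peers.pop(name, None)

async def serve(host="127.0.0.1", port=8900):
    relay = SignalingRelay()
    server = await asyncio.start_server(relay.handle, host, port)
    async with server:
        await server.serve_forever()
```

On a LAN with known IPs and open ports, the peers' host candidates are usually enough, so skipping STUN/TURN is reasonable; the browsers (or a Python WebRTC stack) still do the actual ICE/DTLS/SRTP work themselves.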