I have a multi-client server in which multiple clients are connected to a server.
All the clients send text messages to the server and receive a reply. Now, if one client tries to flood the server with a SYN flood (run from a different terminal), I want the server to close that client's connection while continuing to exchange messages with the other clients.
In short: how do I prevent a DoS attack?
When dealing with a SYN flood, you don't handle it in Python. A SYN flood is one machine sending a large number of SYN packets to your server, leading to exhaustion of kernel resources.
The attack takes advantage of the fact that connection attempts are handled entirely by the kernel. Your application is only notified of a new connection after it has been fully established (which never happens in a SYN flood). Instead, your socket's backlog quickly fills up and no further connection attempts are possible until the half-open connections time out.
As such, you'll have to handle this in the kernel, e.g. by increasing the socket's backlog (note that even half-open connections consume some memory unless you use SYN cookies or similar) or by limiting the rate of SYN packets that are accepted, e.g. with iptables or another firewall.
If the socket's backlog is full, no new connections will be accepted. Existing connections are not affected and will continue to be served. However, during a SYN flood other kernel resources are also seriously strained, which means you might still have problems communicating, depending on the actual circumstances.
To state it clearly once more: SYN floods are not something you can handle in Python; you deal with them by properly configuring your kernel.
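For completeness, here is a minimal sketch of the only application-side knob Python gives you, the listen backlog; the port, the backlog value, and the kernel/firewall settings in the comments are illustrative assumptions, not recommendations, and the real mitigation happens outside Python:

```python
import socket

# A plain TCP server. The only SYN-flood-related setting available at this
# level is the size of the accept backlog passed to listen().
BACKLOG = 1024  # example value; the kernel caps it at net.core.somaxconn

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen(BACKLOG)

# Everything else belongs to the kernel / firewall, for example (illustrative):
#   sysctl -w net.ipv4.tcp_syncookies=1
#   sysctl -w net.ipv4.tcp_max_syn_backlog=4096
#   iptables -A INPUT -p tcp --syn --dport 9000 -m limit --limit 25/s -j ACCEPT

while True:
    conn, addr = server.accept()  # only fully established connections show up here
    conn.sendall(b"hello\n")
    conn.close()
```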
Purpose:
I'm making a program that will set up a dedicated server (software made by the game devs) for a game with minimal effort. One common step in making the server functional is port forwarding, i.e. creating a port-forward rule on a router.
My friends and I have been port forwarding through conventional means for many years, with mixed results. As such, I am hoping to build a function that will forward a port on a router when given the internal IP of the router, the internal IP of the current computer, the port, and the protocol. I have looked for solutions to similar problems, but I found them difficult to understand since I'm not really familiar with the socket module. I would prefer not to use any programs that are not generally installed on Windows, since I plan to have this function work on systems other than my own.
Approaches I have explored:
Creating a .bat file that issues commands by means of netsh, then running it.
Making additions to the settings of a router found under Network -> Network Infrastructure (I do not know how to access these settings programmatically).
(I'm aware programs such as GameRanger do this.)
Using the Socket Module.
If anyone can shed some light how I can accomplish any of the above approaches or give me some insight on how I can approach this problem another way I would greatly appreciate it.
Thank you.
You should first read up on UPnP (router port forwarding) and be aware that it is normally disabled.
Depending on your needs, you could also take a look at SSH reverse tunnels, and at SSH in general, as it can solve many problems.
But you will find that doing advanced networking things on Windows is a bad idea.
At the very least you should use Cygwin.
And if you are really interested in network traffic at all, Wireshark should be installed.
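If the router does have UPnP enabled, you can ask it for the mapping directly from Python. Below is a minimal sketch using the third-party miniupnpc bindings; the port number and description are placeholders, and it assumes UPnP/IGD is actually switched on in the router:

```python
import miniupnpc  # third-party bindings for the MiniUPnP client library

def forward_port(port, protocol="TCP", description="dedicated game server"):
    """Ask the gateway (via UPnP/IGD) to map an external port to this machine."""
    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200   # milliseconds to wait for gateway replies
    upnp.discover()            # find UPnP devices on the LAN
    upnp.selectigd()           # pick the Internet Gateway Device
    # arguments: external port, protocol, internal host, internal port, description, remote host
    upnp.addportmapping(port, protocol, upnp.lanaddr, port, description, "")
    return upnp.externalipaddress()

if __name__ == "__main__":
    print("Port mapped; external IP is", forward_port(25565))  # 25565 is a placeholder port
```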
I'm not sure that's possible with sockets alone. As far as I know, ports aren't actually a physical thing; they're just an abstraction, a convention made by today's protocols and supported by your operating system, that allows you to have multiple connections on one machine.
A socket is basically an object provided to you by the operating system that implements some protocol stack; the operating system exposes this functionality through the socket API, which your program uses to communicate with other computers. Port forwarding happens on the router: when the router's operating system receives incoming packets destined for some port, it drops them unless a rule says otherwise. Think of your router as a bouncer or doorman standing at the entrance of a building: the building is your LAN, your apartment is your machine, and the rooms within your apartment are ports. A port rule means "traffic arriving on IP Y and port X of the router is forwarded to IP Z and port A of some computer within the LAN" (this is what NAT/PAT provides and implements). Going back to the analogy: the doorman receives mail destined for some port and checks whether that port is open; if not, he drops the mail, and if it is, he lets it through to some room within some apartment. (Sounds complex, I know; apologies.) My point is that every router implements port rules and port blocking a little differently, and there is no standard protocol for doing it. A socket is an object that lets your program communicate with others; you could create a server/client with sockets, but creating port rules would effectively mean programming your router, and I'm not sure that's possible.
What you COULD do is:
Every router provides some HTTP (web) interface that is used to create and forward ports. Maybe, if you read about your router, you could get access to that interface and write a Python HTTP script that forwards ports automatically.
Another point I forgot: you need to make sure your own firewall isn't blocking the ports, but there's no need for sockets or Python to do that; just configure it manually.
I am trying to understand how to use websockets correctly and seem to be missing some fundamental part of the puzzle.
Say I have a website with 3 different pages:
newsfeed1.html
newsfeed2.html
newsfeed3.html
When a user goes to one of those pages they get a feed specific to the page, i.e. newsfeed1.html = sport, newsfeed2.html = world news, etc.
There is a CoreApplication.py that does all the handling of getting data and parsing etc.
Then there is a WebSocketServer.py, using say Autobahn.
All the examples I have looked at (and that is a lot) only seem to react to a message from the client (browser) within WebSocketServer.py; think chat echo examples. So a client browser sends a chat message and it is echoed back or broadcast to all connected client browsers.
What I am trying to figure out is given the following two components:
CoreApplication.py
WebSocketServer.py
How to best make CoreApplication.py communicate with WebSocketServer.py for the purpose of sending messages to connected users.
Should CoreApplication.py normally just send command messages to WebSocketServer.py as a client? For example, like this:
CoreApplication.py -> connects to WebSocketServer.py as a normal client -> sends a JSON command message (like "broadcast message X to all users" or "send message Y to a specific remote client") -> WebSocketServer.py determines how to process the incoming message depending on which client is connected to which feed and sends it to the appropriate remote client browsers.
OR, should CoreApplication.py connect programmatically to WebSocketServer.py? I cannot seem to find any examples of doing this, for example with Autobahn or other simple websocket libraries, since once the WebSocketServer is instantiated it seems to run in a loop and does not accept external sendMessage requests.
So, to sum up the question: what is the best practice? To simply make CoreApplication.py interact with WebSocketServer.py as a client (with special command data), or to have CoreApplication.py use an already running instance of WebSocketServer.py (both of which are on the same machine) through some more direct method, so it can send messages without first having to make a full websocket connection to the WebSocketServer.py server?
It depends on your software design: if you decide the logic of WebSocketServer.py and CoreApplication.py belongs together, merge it.
If not, you need some kind of inter-process communication (IPC).
You can use websockets for this IPC, but I would suggest something simpler. For example, you can use JSON-RPC over TCP or a Unix domain socket to send control messages from CoreApplication.py to WebSocketServer.py.
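A minimal sketch of such a control channel, assuming both processes run on the same Unix-like machine (on Windows a localhost TCP socket would take the place of the Unix domain socket); the socket path, the message format, and the broadcast() callback are placeholders for whatever WebSocketServer.py actually exposes:

```python
import json
import os
import socket

CONTROL_SOCKET = "/tmp/wsserver.ctl"  # placeholder path

# --- WebSocketServer.py side: accept JSON control messages from CoreApplication.py ---
def control_listener(broadcast):
    """Run next to the websocket loop; `broadcast` is a placeholder callback."""
    if os.path.exists(CONTROL_SOCKET):
        os.remove(CONTROL_SOCKET)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(CONTROL_SOCKET)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            msg = json.loads(conn.recv(65536).decode())
            if msg.get("cmd") == "broadcast":
                broadcast(msg["feed"], msg["text"])  # hand off to the websocket layer

# --- CoreApplication.py side: push a command whenever new data is ready ---
def send_command(feed, text):
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(CONTROL_SOCKET)
    cli.sendall(json.dumps({"cmd": "broadcast", "feed": feed, "text": text}).encode())
    cli.close()
```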
I need to constantly access a server to get real-time data for financial instruments. The price is constantly changing, so I need to request new prices every 0.5 seconds. The brokers' REST APIs let me do this; however, I have noticed there's quite some delay when connecting to the server. I just noticed that they also have a websocket API, though. From what I have read, both have some pros and cons, but for what I want to do, and because speed is especially important here, which kind of API would you recommend? Is websocket really faster?
Thank you!
The most efficient operation for what you're describing would be to use a webSocket connection between client and server and have the server send updated price information directly to the client over the webSocket ONLY when the price changes by some meaningful amount or when some minimum amount of time has elapsed and the price has changed.
This can be much more efficient than having the client constantly ask for new price changes, and the new information reaches the client sooner.
So, if you're interested in how quickly information about a new price level reaches the client, a webSocket can get it there much more promptly, because the server can send the new pricing information directly to the client the very moment it changes on the server. With a REST call, by contrast, the client has to poll on some fixed time interval and will only ever get new data at the point of its polling interval.
A webSocket can also be faster and easier on your networking infrastructure, simply because fewer network operations are involved in sending a packet over an already open webSocket connection than in creating a new connection for each REST/Ajax call, sending new data, then closing the connection. How much of a difference this makes in your particular application is something you'd have to measure to really know.
But, webSockets were designed to help with your specific scenario where a client wants to know (as close to real-time as practical) when something changes on the server so I would definitely think that it would be the preferred design pattern for this type of use.
Here's a comparison of the networking operations involved in sending a price change over an already open webSocket vs. making a REST call.
webSocket
Server sees that a price has changed and immediately sends a message to each client.
Client receives the message about new price.
Rest/Ajax
Client sets up a polling interval
Upon next polling interval trigger, client creates socket connection to server
Server receives request to open new socket
When connection is made with the server, client sends request for new pricing info to server
Server receives request for new pricing info and sends reply with new data (if any).
Client receives new pricing data
Client closes socket
Server receives socket close
As you can see, there's a lot more going on in the REST/Ajax call from a networking point of view, because a new connection has to be established for every call, whereas the webSocket uses an already open connection. In addition, in the webSocket case the server just sends the client new data when new data is available; the client doesn't have to request it regularly.
If the pricing information doesn't change super often, the REST/Ajax scenario will also frequently have "do-nothing" calls where the client requests an update, but there is no new data. The webSocket case never has that wasteful case since the server just sends new data when it is available.
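As an illustration of the push model described above, here is a hedged sketch using the third-party websockets package with asyncio (a recent version, where the connection handler takes a single argument); the symbol, the check interval, and the simulated get_latest_price() are placeholders for your broker's actual data source:

```python
import asyncio
import json
import random
import websockets  # third-party: pip install websockets

CLIENTS = set()

async def handler(ws):
    """Track each connected browser; prices are pushed from broadcast_prices()."""
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.discard(ws)

async def get_latest_price():
    # Placeholder: in reality, query your broker's API here.
    return round(100 + random.random(), 4)

async def broadcast_prices():
    last = None
    while True:
        price = await get_latest_price()
        if price != last and CLIENTS:  # push only when something changed
            websockets.broadcast(CLIENTS, json.dumps({"symbol": "EURUSD", "price": price}))
            last = price
        await asyncio.sleep(0.1)       # example check interval

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await broadcast_prices()

asyncio.run(main())
```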
I would like to be able to read from and write to a USB port on a remote machine as if it were local. I want to do this by writing a Python script that establishes a TCP connection to the remote machine and then constantly reads from the USB port and writes to the TCP connection, and vice versa. What is the best way to code this up in Python simply and quickly?
I had to do the same thing you're asking about for a robotics project this past year. We had a Raspberry Pi constantly reading from a USB port linked to an Arduino board, and as soon as it got a message it sent it through TCP to all the connected remote clients.
The project is called autonomee and is available on GitHub.
To summarize, you have to do the following:
The 'client' connects to the server that is linked to the USB "source"
Have a thread (on the server) constantly reading from the USB (I'd recommend using pyserial or pyusb for that)
When you receive some data, send it through TCP to the remote client (more on that below)
The remote client keeps listening for data and whenever it gets a message it processes it
The part that needs the most thought is the TCP connection, and it's not that hard.
You can either use Twisted for a higher-level TCP server or just use the standard TCPServer class (we did the latter). Check the examples in the SocketServer docs; they are really useful!
I can't give you much more detail, as it highly depends on what kind of data you have to send and at what frequency, but I'd advise you to have a look at the code I've produced for the server and the client.
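For reference, here is a rough sketch of that serial-to-TCP bridge in Python 3, assuming pyserial is installed; the device path, baud rate, and TCP port are placeholders you'd adapt to your own setup:

```python
import serial        # third-party: pip install pyserial
import socketserver
import threading

SERIAL_PORT = "/dev/ttyUSB0"   # placeholder device path
BAUD_RATE = 9600               # placeholder baud rate
clients = []                   # currently connected TCP clients
clients_lock = threading.Lock()

class BridgeHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Register this client, then forward anything it sends to the serial port.
        with clients_lock:
            clients.append(self.request)
        try:
            while True:
                data = self.request.recv(1024)
                if not data:
                    break
                ser.write(data)                    # TCP -> USB direction
        finally:
            with clients_lock:
                clients.remove(self.request)

def serial_reader():
    # USB -> TCP direction: whatever arrives on the serial port goes to every client.
    while True:
        data = ser.read(ser.in_waiting or 1)
        if data:
            with clients_lock:
                for c in clients:
                    c.sendall(data)

ser = serial.Serial(SERIAL_PORT, BAUD_RATE, timeout=1)
threading.Thread(target=serial_reader, daemon=True).start()

with socketserver.ThreadingTCPServer(("0.0.0.0", 5331), BridgeHandler) as server:
    server.serve_forever()
```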
Recently I have been studying email-related topics and have written a simple mail client to send emails.
Unfortunately, due to the bad network, I cannot connect to smtp.gmail.com from home. It is OK when I use a proxy in the browser, and also OK when the script is run at the company.
So, is there any way to set a proxy for the SMTP protocol? I don't see anything I can use in the smtplib module in Python 2.7, and I think it is of no use to set an HTTP proxy; they are two different kinds of protocols. I have also searched Google and Stack Overflow and cannot find a reasonable solution.
It also seems there is such a thing as a SOCKS proxy. Would that be useful?
I hope somebody can point me in the right direction.
Install a local mail server that maintains its own mail queue, such as Postfix. Your own local mail server effectively acts as a caching SMTP proxy, which is exactly what you want. Your local application delivers its mail to Postfix, which makes sure the mail gets delivered to the actual recipient. There are lots of other mail servers that do this job perfectly well, too.
Setting up Postfix is out of scope for an SO answer (or a ServerFault one, if this gets migrated), but there are lots of tutorials around (and it depends on the machine you're using). Setting up Postfix to use Gmail as a smarthost will be of interest to you, too, as it involves some minor hassles with certificates.
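On the Python side nothing changes except the host you hand to smtplib: once a local Postfix (or similar) relays for you, the script delivers to localhost and lets the mail server worry about reaching Gmail. A minimal sketch, with the addresses as placeholders:

```python
import smtplib
from email.mime.text import MIMEText

# Build a simple message; the addresses are placeholders.
msg = MIMEText("Hello from behind the relay.")
msg["Subject"] = "Test"
msg["From"] = "me@example.com"
msg["To"] = "you@example.com"

# Deliver to the local Postfix instance; it queues the mail and relays it
# to the real destination (e.g. via Gmail configured as a smarthost).
server = smtplib.SMTP("localhost", 25)
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()
```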