Sending a chat message to a LAN game with Python

This might be a confusing question...
But take for example the game Battlefield2.
Is it in any way possible to send chat messages to the game through Python? The reason I am asking is that I've seen messages appear from nowhere in various games, and I want to know if it's possible. Now, the way I understand a packet is like this...
It's a small collection of data with a sender IP address and a recipient IP address; contained within the packet is the payload. So, in theory... if the 'packet name' for the chat message is something like:
SRV_CHAT|<Sender_Info>|<Message Text>
(For example): SRV_CHAT|10.1.1.5,Player1|hello, how's the game?
Does that mean that I can create a Python script to send my own message to a LAN game and have it appear? If it is possible, how can I go about dissecting the actual packet information in order to discover its parameters? What I mean by this is finding the data contained in there...
It's just a thought and a question I've had for a long time.
Thanks!

Yes, it is technically possible, but not very feasible.
What you said about packets is essentially correct, and you can read more about that here.
However, beyond packets, there are entire stacks of protocols that determine where packets go and who receives them; you can read about that here.
This means that even if you managed to emulate a connection to the game's server, you could possibly send data to it, but the server most likely does not accept arbitrary connections that simply send messages. It expects connections made by other game clients, and unless you can emulate a client's requests and responses correctly, it will probably not work.
This means that your idea of using Python to connect directly to a server and send messages in a format similar to what you suggested would not work.
However, the server almost certainly supports server messages: messages sent FROM the server itself into the game. If someone hosts a game, for instance, they can send messages as the game server (host). Or, if it is a dedicated server, the administrator might not be in the game at all and instead send messages to the players through a management console.
It is likely that 'plugins' or other methods of hooking into this control are available, meaning that you could send a message to the application running the game server, telling it to send a message into the game.
This would be less emulation and more an implementation of the game's management system.
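For illustration, here is a minimal sketch of that idea in Python. Everything in it is hypothetical: it assumes the dedicated server exposes a telnet-style admin console on some port and accepts a 'say' command, so the host, port, password and command syntax would all have to be replaced with whatever the real management interface uses.

import socket

# All values below are made up; consult your game server's admin documentation.
admin = socket.create_connection(("192.168.1.10", 4711))
admin.sendall(b"login my_rcon_password\n")  # authenticate with the console
admin.sendall(b"say Hello from Python!\n")  # ask the SERVER to post a chat message
print(admin.recv(4096))                     # read whatever the console replies
admin.close()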
It is important to note that games differ, and so does their operation, meaning this may or may not work depending on what options are available. In my experience, games like Battlefield 2 (which I have not played) usually have these tools built in.
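As for dissecting the actual packet contents, the usual approach is to capture the game's traffic with a sniffer and look for patterns. Here is a minimal sketch using the third-party scapy library; the port number is a guess, so run Wireshark first to find the one your game actually uses.

from scapy.all import Raw, sniff

def show_payload(pkt):
    # Print the raw application payload so you can hunt for chat strings.
    if pkt.haslayer(Raw):
        print(bytes(pkt[Raw].load))

# Capture 20 UDP packets on a hypothetical game port.
sniff(filter="udp and port 16567", prn=show_payload, count=20)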
Specifically for Battlefield 2, here are some links I found that you might like.
http://www.gamefront.com/files/listing/pub2/Battlefield_2/Official_Server_Files/Dedicated_Server/
http://www.bf2cc.com/

Related

Efficient way to send results every 1-30 seconds from one machine to another

Key points:
I need to send roughly 100 float numbers every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine listens for them and passes them to an HTTP server (nginx), a Telegram bot, and another program that sends alert emails.
How would you do this and why?
Please be accurate. It's the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details; enlighten me!
Some small portion (a few lines) of the code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol: will you keep a persistent connection to your server, or connect each time new data is ready?
Then: will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate service?
This would be the most secure way if other people will also be connecting to nginx to view sites, etc.
Write or use another server running on a different port, for example another nginx process just for this. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then, on the client side, bundle all the data into a packet every x seconds (pickle.dumps(), JSON, or similar), connect to your port with your credentials and pass the packet.
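As a rough sketch of that client side, assuming HTTPS with basic auth as suggested above; the endpoint URL and credentials are placeholders, and this uses the third-party requests library.

import requests

readings = [1.0] * 100  # the ~100 floats read from the sensors

# POST one batch over HTTPS with basic authentication.
resp = requests.post(
    "https://collector.example.com/ingest",  # hypothetical endpoint
    json={"readings": readings},
    auth=("sensor-box", "s3cret"),
    timeout=10,
)
resp.raise_for_status()  # fail loudly if the server rejected the batch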
On the server side, a Python script can wait for the packets there.
Or you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you want or need one. I don't think it is necessary, though, and recovery from broken connections can become bulky to code.
No, just wait on some port for a connection. The client must clearly identify itself (otherwise you instantly drop the connection), prove that it speaks your protocol, and only then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security and then pass only data.
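A minimal sketch of such an SSL-wrapped listener using Python's standard ssl module; the certificate paths and port are placeholders, and the credential check is left out.

import socket
import ssl

ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.load_cert_chain("server.crt", "server.key")  # the keys built in advance

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(5)

while True:
    conn, addr = srv.accept()
    tls = ctx.wrap_socket(conn, server_side=True)  # encrypt this connection
    data = tls.recv(65536)  # one batch of ~100 floats fits in a single read
    # ... verify the client's credentials in `data`, then store the readings ...
    tls.close()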
Do not worry about speed. Sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want, as often as you need. Nothing short of a DoS attack will impact it much.
If you are on Windows, it is better to use some finished server, because Windows sometimes does not release a socket on time, so you will be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, address reuse and then some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, so multiple receivers could get the data if needed). You can safely send a maximum of roughly 60 to 170 double-precision floats in a single UDP datagram (depending on your network's MTU).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
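A sketch of both sides of that approach; the address and port are placeholders.

import socket
import struct

# Sender: pack 100 doubles into one 800-byte datagram.
values = [0.0] * 100
payload = struct.pack("!%dd" % len(values), *values)
out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
out.sendto(payload, ("192.168.1.20", 5005))

# Receiver (separate process): unpack however many doubles arrived.
inp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
inp.bind(("", 5005))
data, sender = inp.recvfrom(2048)
values = struct.unpack("!%dd" % (len(data) // 8), data)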

Designing a TCP/IP command interface for a robot

Introduction
I am working on a robotic sampling application. Each robot has a cabled interface carrying power, TCP/IP and gas-sensor tubing. The robots are on an ARM platform, and I intend to do most of the programming in Python. The robots move slowly and there is nothing computationally intensive running on them.
Each robot should perform these "services":
Move left/right (for manual control)
Move up/down (for manual control)
Go to next sector
Each robot reports these sensor readings or events:
Temperature
End-switch right
Docked at sector with ID: ###
Encoder count [lateral,longitudinal]
Error event
Client - Server Architecture
I view each robot as a client, and the sensor hub computer as the server.
The server will have a known IP and listening port, and will allow the robots to connect.
The server will do the measurement scheduling, and command the robots to move from sector to sector.
The server may maintain and update a model of each robot with a state-vector containing:
[ position, switches, sensor-readings, status]
Questions
From debugging serial communication, I have experienced the benefits of having a human-readable communication interface with a strict poll-response structure. I am, however, not sure how we should go about designing this interface.
Are there any best practices in designing communication interfaces for devices like these?
Should I think about packet loss and corruption, or is this fully handled by TCP?
Should I design everything as services polled by the server, or should the robots broadcast their sensor readings and events?
Should I implement Acknowledgment of commands, e.g. go-to-next-section
I apologize for the broad and vague problem formulation; this may be more a philosophy question than a software problem. However, I will greatly appreciate your thoughts, experiences and advice.
TLDR
What are the guiding principles of designing TCP communication protocols for client-server architectures?
Overall, I'd suggest using Python Twisted to build your server and client (robot-side) applications (https://twistedmatrix.com/trac/). Anyway, to answer your questions:
"Are there any best practices in designing communication interfaces for devices like these?"
See the answers to your other questions below.
"Should I think about packet loss and corruption, or is this fully handled by TCP?"
TCP guarantees the integrity of the data you receive. The primary thing to worry about is whether the clients and server are connected or not. You can use a ReconnectingClientFactory to make your connections a little more robust when the server is restarted (see the Twisted docs). Also be aware that TCP is a streaming protocol (you may not get the whole message at once), so make sure you have the whole message before acting on it. If you are sending messages quickly you may also have more than one message in your TCP buffer for that client.
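A common way to handle the "whole message" problem on plain sockets is to length-prefix each message; Twisted's Int32StringReceiver does essentially this for you, but a hand-rolled sketch looks like this:

import struct

def send_msg(sock, payload):
    # Prefix each message with its 4-byte big-endian length.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, n):
    buf = b""
    while len(buf) < n:  # keep reading until the whole chunk has arrived
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)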
"Should I design everything as services polled by the server, or should the robots broadcast it's sensor readings and events?"
Avoid polling. When the robots start up they should establish a persistent TCP connection with the server. Messages should be sent and received (handled) asynchronously.
"Should I implement Acknowledgment of commands, e.g. go-to-next-section"
Wouldn't hurt. Would be good for flow control within your application as well as recovering from situations where the server or robots are restarted and you can't be sure whether a message was processed or not.
"What are the guiding principles of designing TCP communication protocols for client-server architectures?"
Probably the thing to do for your app is to design a simple command-response protocol. Start by designing two simple message sets, one going from client to server, the other from server to client. You could use a simple human-readable XML message set as follows:
Server to Client
<SCMessage type="TurnRight"></SCMessage>
<SCMessage type="TurnLeft"></SCMessage>
<SCMessage type="NextSector"><param key="sectorName" value="B"/></SCMessage>
<SCMessage type="GetStatus"></SCMessage>
<SCMessage type="Ack"></SCMessage>
Client to Server
<SCMessage type="SensorUpdate"><param key="data" value="123"/></SCMessage>
<SCMessage type="StatusChanged"><param key="status" value="Good"/></SCMessage>
....
<SCMessage type="Ack"></SCMessage>
When parsing these messages you can tease them apart by looking for the SCMessage start/stop tags. Once you have a whole message you can use an XML parser to parse its contents. Alternatively, you could use JSON, which would probably be a lot easier (basically you'd be sending little dictionaries back and forth).
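A sketch of what the JSON variant of the message set above might look like, using newlines as message delimiters:

import json

def encode(msg_type, **params):
    # One message per line; the trailing newline is the frame delimiter.
    return (json.dumps({"type": msg_type, "params": params}) + "\n").encode()

def decode(line):
    msg = json.loads(line)
    return msg["type"], msg["params"]

encode("NextSector", sectorName="B")
# -> b'{"type": "NextSector", "params": {"sectorName": "B"}}\n'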
You've got a lot of reading to do ;) I'd start by reading up on Python Twisted a bit and making little toy programs to get comfortable with things.

Man in the middle - proxy - too slow

I've developed a simple transparent proxy script in Python that changes a string in the server-client traffic.
Source Here
It was used with SMTP, and now I have adapted it to another application.
My problem is that after it changes the first defined string, I need it to stop processing the messages and simply pass them through to the correct host.
Or... if there's any other software that does this, that would be great.
Not that I don't trust my own code, but something well tested and with better performance would be great.
Also, keep in mind that the original code handled little traffic, just a few e-mails; this new one will handle a lot: several SOAP messages passing between the hosts, with multiple clients.
It's working, but every now and then I get a "server disconnected" message in the client app.
(Sorry for my lousy English; I will update the post if needed for better understanding.)
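For reference, a minimal sketch of the "replace once, then pass everything through" loop described in the question; it ignores the corner case where the target string straddles two reads, and the two sockets are assumed to be already connected.

import select

def relay(client, server, old, new):
    replaced = False
    while True:
        readable, _, _ = select.select([client, server], [], [])
        for src in readable:
            data = src.recv(65536)
            if not data:
                return  # one side closed; stop relaying
            if not replaced and old in data:
                data = data.replace(old, new, 1)
                replaced = True  # from now on, forward everything untouched
            dst = server if src is client else client
            dst.sendall(data)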

Clustering TCP servers, so can send data to all clients

Important note:
I've asked this question already on ServerFault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm making the TCP server in Python with Twisted, creating my own 'protocol' for communication between the app and the backend, and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users by upgrading to a bigger server (which could become rather limiting), or by adding new servers in a cluster configuration, i.e. having several servers sitting behind a load balancer, probably with one database they all access.
I have sketched out the rough architecture of this (diagram not shown here).
However, what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could have an open TCP (or SSL) connection with every other server. When one server wants to send a message to all users, it simply passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could be a boss, i.e. it decides whether the others are up and running, and if not it removes them from the database. (If a server was up but lost its connection to the boss, it could check the database to see if it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively (and I'm not sure if this is possible; it definitely seems like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide on (or, if you already did, share with us) the reliability requirements for your system: must all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is one where a message is not delivered to all users if a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user and sends it to all other users connected to it, and to all the other connected servers; those servers then send it to all of their users. To scale the system, you just run an additional server that connects to all the existing servers.
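A sketch of that fan-out logic; the socket bookkeeping (who is connected locally, which sockets lead to peer servers) is assumed to exist elsewhere.

def fan_out(msg, origin, local_clients, peer_servers, from_peer=False):
    # Deliver to every locally connected user except the sender.
    for c in local_clients:
        if c is not origin:
            c.sendall(msg)
    # Only the first-hop server forwards to its peers, which avoids loops.
    if not from_peer:
        for p in peer_servers:
            p.sendall(msg)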
Have a look at how this is handled by IRC servers. They essentially do this already: everybody can send to everybody else, on all servers, or just to single users (also on another server), and to groups, called "channels". It works by routing amongst the servers.
It's not that hard, if you can make sure the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable internet news source was the IRC network. All the web sites were down because of bandwidth; it took them ages to even get a plain-text web page back up. During this time, IRC networks were able to provide near real-time, moderated news channels across the Atlantic. You might no longer have been able to log into a server on the other side, but at least the servers were able to keep up their server-to-server connections across the ocean.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
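A crude sketch of the notification side, assuming a hypothetical deliver() that looks up the user's TCP connection; a real system would use the database's notification mechanism rather than naive polling.

import sqlite3
import time

db = sqlite3.connect("messages.db")  # stand-in for the central database

while True:
    rows = db.execute(
        "SELECT id, recipient, body FROM messages WHERE delivered = 0"
    ).fetchall()
    for msg_id, recipient, body in rows:
        deliver(recipient, body)  # hypothetical: send over the user's TCP connection
        db.execute("UPDATE messages SET delivered = 1 WHERE id = ?", (msg_id,))
    db.commit()
    time.sleep(0.5)  # naive polling interval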
A TCP server cannot be clustered this way; the snapshot you put here is a classic HTTP server example.
Since the device opens a raw TCP connection to the server (a pure socket), there is no way of establishing a load-balancing server.

How would I keep a constant piece of data updated through a socket in Python?

I have a client and a server, both written in Python 2.7.
Let's say I wanted to make a multiplayer game server (which I don't at the moment, but I'm working towards it). I would need to keep the server (and the other clients) up to date on my character's whereabouts, correct?
How would I do this with sockets? Would I send or request information only when it is needed (e.g. the character moves, or another player's character moves and the server sends the information to the other clients), or would I keep a socket constantly open to send everybody's movement data in real time, regardless of whether they have actually done anything since the last piece of data was sent?
I won't struggle coding it, I just need help with the concept of how I would actually do it.
With TCP sockets it is more typical to leave the connections open, given the teardown & rebuild cost.
Eventually, when scaling, you will want to look into NewIO/RawIO.
If you do not, the game client might take a step and never get confirmation that it reached the server and the other players.
Definitely keep the socket open, but you should consider using something like ZeroMQ which gives you more kinds of sockets to work with.
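For instance, here is a PUB/SUB sketch with the third-party pyzmq binding, where the server broadcasts every position update and clients simply subscribe; the port and message format are made up.

import zmq

ctx = zmq.Context()

# Server side: broadcast position updates to every subscriber.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")
pub.send_string("player1 10 20")

# Client side (separate process): subscribe to all updates (empty prefix = everything).
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")
print(sub.recv_string())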
