I have two machines connected by a switch. I have a popular server application, which we can call "SXC_SERVER", on machine A, and I interrogate SXC_SERVER with the corresponding client application from machine B, which I'll call "SXC_CLIENT". What I am trying to do is two-fold:
firstly, capture the traffic of the SXC_SERVER and SXC_CLIENT interaction with tcpdump. The interaction between the two is a simple GET and RESPONSE, but I require the traffic traces.
secondly, I want to log the Resident Set Size (RSS) usage of the SXC_SERVER process during each interaction/iteration.
Moreover, I don't just need one traffic trace of the communication and one memory usage log of the SXC_SERVER process, otherwise I wouldn't be writing this because I could go away and do that in ten minutes... In fact I am aiming to do very many! But let's say here for simplicity I want to do 10.
Since this will be very labor intensive, requiring me to be at both machines stopping and starting the SXC_CLIENT-to-SXC_SERVER interrogation, the tcpdump traffic capture, and the RSS memory logging of the SXC_SERVER process, I want to write an automation script.
But! I am not a programmer, or software guy...(darn)
However, that said, I can imagine a separate client/server program that oversees this automation, which we can call AUTO_SERVER and AUTO_CLIENT. My thoughts are that machine B would run AUTO_CLIENT and machine A would run AUTO_SERVER. The aim of both is to facilitate the automation, i.e. the stopping and starting of the tcpdump capture, and the memory logging of the SXC_SERVER process on machine A before machine B queries SXC_SERVER with SXC_CLIENT (if you follow me!).
Effectively after one run of the SXC_SERVER-to-SXC_CLIENT GET/RESPONSE interaction I'll end up with:
one traffic capture *.pcap file called n1.pcap
and one memory log dump (of the RSS associated to the process) called n1.csv.
I am not a programmer or software guy but I can see a rough method (to the best of my ability) to achieve this, as follows:
Machine A: AUTO_SERVER
BEGIN:
msgReceived = open socket(listen on port *n*)
DO
1. wait for machine B to tell me when to start watch (as in the program) to log the RSS memory usage of the SXC_SERVER process, using the hardcoded command:
watch -n 0.1 'ps -p $(pgrep -d"," -x snmpd) -o rss= | awk '\''{ i += $1 } END { print i }'\'' >> ~/Desktop/mem_logs/mem_i.csv'
UNTIL (msgReceived == "FINISH")
quit
END.
Machine B: AUTO_CLIENT
BEGIN:
open socket(new)
for i = 1 to 10, do
1. locally start tcpdump with a hardcoded tcpdump command, with the relevant filter to only capture the SXC_SERVER-to-SXC_CLIENT traffic, and set the output flag to capture all traffic to a PCAP file called n*i*.pcap, where *i* is the integer of the current for loop, saving the file in the folder "~/Desktop/test_captures/".
2. Send the GET request to SXC_SERVER
3. wait for RESPONSE reply from SXC_SERVER
4. after the reply is received, tell machine A to stop the watch command
i++
5. send string "FINISH" to machine A.
END.
As you can see, I assume this would be achieved by the use of a separate, small client/server-like program (which here I've called AUTO_SERVER and AUTO_CLIENT) on both machines. The really rough pseudo-code design should be self-explanatory.
I have found a small client/server socket program located here: http://www.velvetcache.org/2010/06/14/python-unix-sockets which I would think may be suitable if I edit it, but I am not sure exactly how I can feasibly achieve this, which is where you may be able to provide some assistance.
Can Python do this automation?
Can it be done with a single bash script?
Do you think I am on the right path with this?
Or have you any helpful suggestions?
Regards.
You can use Python for this kind of thing, but I would strongly recommend using SSH for the bulk of the work (rather than coding the connection stuff yourself), and then using either a bash script or Python script to launch the tcpdump etc. processes.
Your question, however, is a bit too open-ended for stackoverflow - it sounds like you are asking someone to write this program for you, rather than for help with a specific problem.
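That said, here is a rough sketch of the shape this could take, driven entirely from machine B. It assumes passwordless SSH from machine B to machine A, that SXC_SERVER is the snmpd process (as in your watch command), and the interface, host, directory, and client-command names are placeholders you would replace; it is not a drop-in solution.

import subprocess
import time

SERVER = "user@machineA"                           # placeholder SSH target for machine A
MEM_DIR = "~/Desktop/mem_logs"                     # on machine A
CAPTURE_DIR = "/home/user/Desktop/test_captures"   # on machine B

for i in range(1, 11):
    # 1. Start an RSS logger for the snmpd (SXC_SERVER) process on machine A.
    #    A plain while/sleep loop is used instead of watch so the output
    #    redirects cleanly; it runs until we kill it below.
    mem_cmd = (
        "while true; do "
        "ps -p $(pgrep -d, -x snmpd) -o rss= | awk '{ s += $1 } END { print s }' "
        ">> %s/mem_%d.csv; sleep 0.1; done" % (MEM_DIR, i)
    )
    logger = subprocess.Popen(["ssh", SERVER, mem_cmd])

    # 2. Start tcpdump locally on machine B (needs root; interface and filter
    #    are placeholders).
    pcap = "%s/n%d.pcap" % (CAPTURE_DIR, i)
    capture = subprocess.Popen(["tcpdump", "-i", "eth0", "-w", pcap, "host", "machineA"])
    time.sleep(1)                                  # give the logger and tcpdump a moment to start

    # 3. Run one SXC_CLIENT GET/RESPONSE interaction (placeholder command).
    subprocess.call(["sxc_client", "--get", "machineA"])

    # 4. Stop the capture and the remote logger. The [m] bracket trick stops
    #    pkill from matching its own command line.
    capture.terminate()
    subprocess.call(["ssh", SERVER, "pkill -f '[m]em_%d.csv'" % i])
    logger.wait()

Driving everything from machine B over SSH like this means you don't need a separate AUTO_SERVER/AUTO_CLIENT protocol at all; the SSH session is the control channel.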
Related
I have a task where I need to run some Python file (call it app.py) that brings up a server (using Flask). This is done in the run_tests function. Then I want to query this server for some test inputs that I have. This is done in the function get_sentences_and_test (I do not include its code here for simplicity of the question; it involves waiting for the server to be up, using sleep instructions, and then querying it). I use the Python multiprocessing package (for Process) together with subprocess.
My program has a very simple structure like:
import subprocess
from multiprocessing import Process

def run_tests():
    subprocess.call(['python3', path_to_app.py])   # placeholder for the path to app.py

if __name__ == '__main__':
    api_proc = Process(target=run_tests)
    api_proc.start()
    get_sentences_and_test(api_proc)
    api_proc.terminate()
My problem is that this code works OK and does what it is supposed to do. However, the port that the subprocess call in run_tests occupies while the server is up and running is not freed once the program is done, and I have to kill the process manually.
I want to know:
How can I kill the process that occupies this port?
What is the best practice for doing this? This should be a day-to-day problem for people working with services and multiprocessing/threading, yet I didn't find a simple solution or many sources on the issue.
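For what it's worth, one pattern that is commonly suggested for this kind of cleanup is to skip the extra multiprocessing.Process layer, launch app.py with subprocess.Popen in its own process group, and kill the whole group when the tests finish. A rough, untested sketch; the path and the test driver below are placeholders for your own code:

import os
import signal
import subprocess

def get_sentences_and_test(server_proc):
    # Placeholder for your real test driver (wait for the server, send queries, ...).
    pass

def run_tests():
    # Start app.py in its own process group so the whole tree can be killed later.
    # On Python 3.2+ you can pass start_new_session=True instead of preexec_fn.
    return subprocess.Popen(['python3', 'app.py'], preexec_fn=os.setsid)

if __name__ == '__main__':
    api_proc = run_tests()
    try:
        get_sentences_and_test(api_proc)
    finally:
        # Kill the Flask server and anything it spawned, which frees the port.
        os.killpg(os.getpgid(api_proc.pid), signal.SIGTERM)
        api_proc.wait()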
I know the question title is weird!
I have two virtual machines. The first one has limited resources, while the second one has enough resources, just like a normal machine. The first machine will receive a signal from an external device. This signal will trigger the Python interpreter to execute a script. The script is big, and the first machine does not have enough resources to execute it.
I can copy the script to the second machine to run it there, but I can't make the second machine receive the external signal. I am wondering if there is a way to make the interpreter on the first machine (once the external signal is received) call the interpreter on the second machine, so that the interpreter on the second machine executes the script and uses the second machine's resources. Please check the attached image.
Assume that the connection is established between the two machines and they can see each other, and that the second machine has a copy of the script. I just need the commands that pass the execution to the second machine and make it use its own resources.
You should look into the microservice architecture to do this.
You can achieve this either by using Flask and sending HTTP requests between the machines, or with something like Nameko, which will allow you to create a "bridge" between machines and call functions between them (which seems like what you are more interested in). Example for Nameko:
Machine 2 (executor of resource-intensive script):
from nameko.rpc import rpc

class Stuff(object):
    @rpc
    def example(self):
        return "Function running on Machine 2."
You would run the above service with the Nameko command-line tool (nameko run), as detailed in the docs.
Machine 1:
from nameko.standalone.rpc import ClusterRpcProxy

# This is the AMQP server that machine 2 would be running.
config = {
    'AMQP_URI': AMQP_URI  # e.g. "pyamqp://guest:guest@localhost"
}

with ClusterRpcProxy(config) as cluster_rpc:
    cluster_rpc.Stuff.example()  # "Function running on Machine 2."
More info here.
Hmm, there are many approaches to this problem.
If you want a Python-only solution, you can check out dispy (http://dispy.sourceforge.net/) or Dask (https://dask.org/).
If you want a robust solution (what I use on my home computing cluster, but imo overkill for your problem), you can use SLURM. SLURM is basically a way to string multiple computers together into a "supercomputer": https://slurm.schedmd.com/documentation.html
For a semi-quick, hacky solution, you can write a microservice. Essentially, your "weak" computer will receive the message and then send an HTTP request to your "strong" computer. Your "strong" computer will contain the actual program, compute the results, and pass them back to your "weak" computer.
Flask is an easy and lightweight solution for this.
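For completeness, here is a minimal sketch of that Flask variant. The endpoint, port, hostname, and heavy_script.py are all made-up names, and the client side assumes the requests library is installed.
Machine 2 (runs the heavy script):

import subprocess
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/run", methods=["POST"])
def run():
    # heavy_script.py stands in for your big script; capture_output needs Python 3.7+.
    result = subprocess.run(["python3", "heavy_script.py"],
                            capture_output=True, text=True)
    return jsonify(returncode=result.returncode, stdout=result.stdout)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Machine 1 (receives the external signal and forwards the work):

import requests

# "machine2" is a placeholder hostname; call this from wherever the signal is handled.
response = requests.post("http://machine2:5000/run", timeout=600)
print(response.json())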
All of these solutions require some type of networking. At the least, the computers need to be on the same LAN or both have access over the web.
There are many other approaches not mentioned. For example, you can export an NFS (network file storage) share and have one computer put a file in the shared folder while the other computer performs work on the file. I'm sure there are plenty of other contrived ways to accomplish this task :). I'd be happy to expand on a particular method if you want.
On my VPS I am running 4 Python scripts, and it has been 60 days since I last rebooted it. Now I have to reboot, but if I do, my Python variables and data will be lost, because I don't store them in a file; they are only held in variables inside the scripts.
My OS is Ubuntu Server 16.04 LTS, and I run my Python scripts with the nohup command so they can run in the background.
Now I need a way to stop my scripts without losing their variables, and to start them with the same variables and data after I reboot my VPS.
Is there any way that I can do this?
In addition, I'm sorry for any writing mistakes in my question.
Python doesn't provide any way of doing this.
But you might be able to use CRIU, or a similar tool, to freeze and snapshot the interpreter process. Then, after restart, you can resume the snapshot into a new process that just picks up exactly where you left off.
It may not work [1], but there's a good chance it will. This is essentially the same thing as a Live Migration in the CRIU docs, except that you're not migrating to a new computer/container/etc., just to the future of the same computer. So, start reading with that page, and follow the links from there.
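For a concrete picture, a dump/restore round trip with the criu command-line tool looks roughly like the sketch below (wrapped in subprocess calls to stay in Python). The PID and image directory are placeholders, you need root, the --shell-job flag is only right for terminal-attached processes, and by default criu kills the dumped process after the dump, which is fine here since you are rebooting anyway.

import os
import subprocess

PID = 12345                       # placeholder: PID of the running script
IMG_DIR = "/var/tmp/ckpt_12345"   # placeholder: where the snapshot images go

os.makedirs(IMG_DIR, exist_ok=True)

# Freeze the process and write its full state (memory, fds, ...) to IMG_DIR.
subprocess.check_call(["criu", "dump", "-t", str(PID),
                       "-D", IMG_DIR, "--shell-job"])

# ... reboot the VPS here ...

# After the reboot, resume the process exactly where it left off.
subprocess.check_call(["criu", "restore", "-D", IMG_DIR, "--shell-job"])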
You should probably test before you commit to it.
* Try it (obviously don't include the system restart, just kill -9 the executable) on a Python script that doesn't do anything important (maybe one that increments a counter, prints it out, sleeps for a second, and repeats).
* Maybe try it on a script that does similar kinds of stuff to what yours are doing.
* If it's safe to have two copies of one of your programs running at the same time (they're not going to stomp all over each other writing to the same file, or fight over the same socket, or whatever), start a second copy and test dump/kill/resume that.
* Try it on one of your real processes, still without restart.
* Try it on all four.
* Cross your fingers, sacrifice a chicken, and do it for real.
If that doesn't pan out, the only option I can think of is to go through your scripts, manually figure out everything that needs to be saved and how it could be accessed from the top-level global, and do that in the debugger.
Ideally, you'll write a script that will automate accessing and saving all that stuff—plus another one to feed it into a new instance at restart. Then you just pdb the live interpreters and start dumping everything.
This is guaranteed to be a whole lot of work, and not much fun. On the plus side, it is guaranteed to work if you do it right. On the third hand, it's pretty easy to not do it right.
[1] If you rely on open files, pipes, sockets, etc., CRIU does about as much as you could do, which is more than you might expect at first, but still not everything you could possibly want… Also, if you're using almost all of your RAM, it can be hard to wedge things back into exactly the same state. And there are probably other possible issues.
So, I am participating in this Python process competition where every candidate writes a script that should "kill" the others.
The winner is the one whose name is contained in the last message in dmesg (the kernel ring buffer).
We will run all with root privileges.
There are no actual rules, in fact you can reboot the system and so on.
All the processes will be running at the same time on a Linux machine.
I'd appreciate some advice and ideas. Thanks!
It appears you would like to write a Python script that can:
1. Catch signals
2. Write to the kernel ring buffer.
(1) can be handled with the signal module. (2) can be handled with a C program that calls the printk() function and a Python function that calls that C program; another alternative may be to use /dev/kmsg.
If you get these pieces working, try writing to the kernel ring buffer when you catch signal(s).
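A rough sketch of those two pieces put together in Python (assumes Linux and root, which you said you have; the name string is yours to change, and note that SIGKILL and SIGSTOP can never be caught):

import signal
import sys

MY_NAME = "my_name"   # the string you want to be the last line in dmesg

def write_to_ring_buffer(message):
    # Root can append a line to the kernel ring buffer by writing to /dev/kmsg.
    with open("/dev/kmsg", "w") as kmsg:
        kmsg.write(message + "\n")

def handler(signum, frame):
    # Someone is trying to kill us: get the name into dmesg before exiting.
    write_to_ring_buffer(MY_NAME)
    sys.exit(0)

# Register the handler for the catchable termination signals.
for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGHUP, signal.SIGQUIT):
    signal.signal(sig, handler)

# Sleep until a signal arrives (you could also rewrite the name periodically).
while True:
    signal.pause()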
I have a script which can be run by any user who is connected to a server. This script writes to a single log file, but there is no restriction on who can use it at one time, so multiple people could attempt to write to the log and data might be lost. Is there a way for one instance of the code to know if other instances of that code are running? Moreover, is it possible to gather this information dynamically (i.e. not allow data saving for the second user until the first user has completed his/her task)?
I know I could do this with a text file: I could write the user name to the file when they start, then delete it when they finish, but this could lead to errors if either step is missed, such as on an unexpected script termination. So what other, more reliable ways are there?
Some information on the system: Python 2.7 is installed on a Windows 7 64-bit server via Anaconda. All connected machines are also Windows 7 64-bit. Thanks in advance
Here is an implementation:
http://www.evanfosmark.com/2009/01/cross-platform-file-locking-support-in-python/
If you are using a lock, be aware that stale locks (that are left by hung or crashed processes) can be a bitch. Have a process that periodically searches for locks that were created longer than X minutes ago and free them.
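A minimal sketch of that idea, using an atomic O_CREAT | O_EXCL lock file plus the stale-lock sweep described above (the share path and timeout are placeholders; this works on Windows as well as Unix):

import os
import time

LOCK_PATH = r"\\server\share\script.lock"   # placeholder: lives next to the log file
STALE_AFTER = 10 * 60                        # seconds before a lock is treated as stale

def acquire_lock():
    while True:
        try:
            # O_CREAT | O_EXCL is atomic: exactly one process can create the file.
            fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.write(fd, str(os.getpid()).encode())
            os.close(fd)
            return
        except OSError:
            # Lock already held; free it if its owner looks hung or crashed.
            try:
                if time.time() - os.path.getmtime(LOCK_PATH) > STALE_AFTER:
                    os.remove(LOCK_PATH)
                    continue
            except OSError:
                pass
            time.sleep(1)

def release_lock():
    try:
        os.remove(LOCK_PATH)
    except OSError:
        pass

The script would call acquire_lock() before writing to the log and release_lock() in a finally block afterwards.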
It just isn't clean to allow multiple users to write to a single log and hope things go OK.
Why don't you write a daemon that handles logs? Other processes connect to a "logging port", and in the simplest case they only succeed if no one else has connected.
You can just modify the echo server example given here (keep a timeout in the server for all connections):
http://docs.python.org/release/2.5.2/lib/socket-example.html
If you want to know exactly who logged what, and make sure no one unauthorized gets in, you can use Unix sockets to restrict it to only certain UIDs/GIDs, etc.
Here is a very good example.
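Since the machines here are all Windows, a plain TCP version of that daemon is the more portable sketch (port, log path, and timeout are placeholders). Clients connect to the port, and whatever they send gets appended to the single log, one connection at a time:

import socket

HOST, PORT = "", 9020          # placeholder "logging port"
LOG_PATH = "shared.log"        # the single log file, written only by this daemon

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(5)

while True:
    # Connections are handled strictly one at a time, so writes never interleave.
    conn, addr = server.accept()
    conn.settimeout(30)        # a hung client cannot block the daemon forever
    try:
        with open(LOG_PATH, "ab") as log:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                log.write(data)
                log.flush()
    except socket.timeout:
        pass
    finally:
        conn.close()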
NTEventLogHandler is probably the easiest way for logging to a given Windows machine/server, but it might make more sense to use SyslogHandler if you have a syslog sink on a Unix server.
The catch I can think of with SyslogHandler is that you'll likely need to poke holes through the Windows firewall in order to send packets over the syslog protocol, i.e., 514/TCP ("reliable syslog") and 514/UDP (traditional or "unreliable syslog").
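For reference, pointing the stdlib logging module at a syslog sink is only a few lines (the hostname is a placeholder; 514/UDP is SysLogHandler's default transport, so that is the hole you would typically need through the firewall):

import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# Ship records to the syslog daemon on the Unix server (UDP port 514 by default).
syslog = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
syslog.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(syslog)

logger.info("hello from the Windows box")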