Learning Python for security, having trouble with su

Preface: I am fully aware that this could be illegal if not done on a test machine. I am doing this as a learning exercise in Python for security and penetration testing. This will ONLY be done on a Linux machine that I own and have full control over.
I am learning Python as my first scripting language, hopefully for use down the line in a security position. When I asked for ideas of scripts to help teach myself, someone suggested that I create one for user enumeration. The idea is simple: cat out the usernames from /etc/passwd from an account that does NOT have sudo privileges and try to 'su' into those accounts using the one password that I have. A reverse brute force of sorts: instead of a single user with a list of passwords, I'm using a single password with a list of users.
My issue is that no matter how I have approached this, the script hangs or stops at the "Password: " prompt. I have tried multiple methods, from using os.system and echoing the password in, to passing it as a variable, to using the pexpect module. Nothing seems to work.
When I Google it, all of the recommendations point to using sudo, which isn't a valid option in this scenario, as the user I have access to doesn't have sudo privileges.
I am beyond desperate on this, just to finish the challenge. I have asked on Reddit, in IRC, and all of my programming wizard friends, and beyond echo "password" | sudo -S su, which can't work because the user is not in the sudoers file, I am coming up short. When I try the same thing with just echo "password" | su, I get su: must be run from a terminal. This happens at both a # and a $ prompt.
Is this even possible?

The problem is that su and friends read the password directly from the controlling terminal of the process, not from stdin. The way around this is to launch your own "pseudoterminal" (pty). In Python, you can do that with the pty module. Give it a try.
Edit: The documentation for Python's pty module doesn't really explain anything, so here's a bit of context from the Unix man page for the pty device:
A pseudo terminal is a pair of character devices, a master device and a slave device. The slave device provides to a process an interface identical to that described in tty(4). However, whereas all other devices which provide the interface described in tty(4) have a hardware device of some sort behind them, the slave device has, instead, another process manipulating it through the master half of the pseudo terminal. That is, anything written on the master device is given to the slave device as input and anything written on the slave device is presented as input on the master device. [emphasis mine]
The simplest way to get your pty working is with pty.fork(), which you use like a regular fork. Here's a simple (REALLY minimal) example. Note that if you read more characters than are available, your process will deadlock: it will try to read from an open pipe, but the only way for the process at the other end to generate output is if this process sends it something!
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # We're the child process: switch to running a command
    os.execl("/bin/cat", "cat", "-n")
    print "Exec failed!!!!"
else:
    # We're the parent process
    # Send something to the child process
    os.write(fd, "Hello, world!\n")
    # Read the terminal's echo of what we typed
    print os.read(fd, 14),
    # Read the command's output
    print os.read(fd, 22)
If all goes well you should see this:
Hello, world!
     1  Hello, world!
Since this is a learning exercise, here's my suggested reading list for you: man fork, man execl, and Python's subprocess and os modules (since you're already using subprocess, you may know some of this). Keep in mind the difference, in Unix and in Python, between a file descriptor (which is just a number) and a file object, which is a Python object with methods (in C it's a structure or the like). Have fun!
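Applied to the original su question, a rough sketch might look like the following (Python 2, like the example above; the fixed-size reads and the one-second sleep are crude simplifying assumptions, not robust code):

import os
import pty
import time

def try_su(username, password):
    # Returns True if su accepted the password for this user.
    pid, fd = pty.fork()
    if pid == 0:
        # Child: su reads the password from our pty, not the real /dev/tty
        os.execl("/bin/su", "su", username, "-c", "true")
        os._exit(127)
    os.read(fd, 1024)                 # consume the "Password: " prompt
    os.write(fd, password + "\n")     # answer it
    time.sleep(1)                     # crude: give su time to verify
    try:
        os.read(fd, 1024)             # drain any "Authentication failure" text
    except OSError:
        pass                          # EOF on the master side raises EIO on Linux
    _, status = os.waitpid(pid, 0)
    # su exits 0 only if authentication succeeded and `true` ran
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0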

If you just want to do this for learning, you can easily build a fake environment with your own fake passwd file. You can use Python's built-in crypt module to generate the password hashes. This has the advantage of proper test cases: you know what you are looking for and where you should succeed or fail.
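A minimal sketch of such a fake environment (the usernames, passwords, and two-character salts here are made up for illustration):

import crypt

# fake password "file": username -> crypt(3)-style hash, never the plain password
users = {
    'alice': crypt.crypt('letmein', 'ab'),
    'bob':   crypt.crypt('secret', 'cd'),
}

def check(user, password):
    # Re-hash the guess using the stored hash as the salt and compare
    hashed = users.get(user)
    return hashed is not None and crypt.crypt(password, hashed) == hashed

print(check('alice', 'letmein'))   # True
print(check('alice', 'wrong'))     # False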

Related

Paramiko read stdout after every stdin [duplicate]

I am writing a program in Python which must communicate through SSH with a physical target and automatically send some commands to this target (it is for testing).
I started by doing this with Paramiko and everything was perfect until I had to send several commands where, for example, the second one must execute in the context of the first (for example, the first one does cd /mytargetRep and the second one is ./executeWhatIWant). I can't use exec_command to do so, because each exec_command starts a new session.
I tried to use a channel with invoke_shell(), but I have another problem with this one: I don't know when command execution has ended. Some commands execute very quickly and others take much longer, so I need to know when the command execution is over.
I know a workaround is to use exec_command with shell logic operators such as && or ;, for example exec_command("cd /mytargetRep && ./executeWhatIWant"). But I can't do that, because it must also be possible to execute commands manually (I have a minimalist terminal from which I can send commands), so, for example, the user will type cd /mytargetRep and then ./executeWhatIWant, not cd /mytargetRep && ./executeWhatIWant.
So my question is: is there a solution using Paramiko to send several commands in the same SSH session and be able to know when each command's execution ends?
Thanks
It seems that you want to implement an interactive shell, yet you need to control individual command execution. That's not really possible with just the SSH interface. The "shell" channel in SSH is a black box with an input and an output, so there's nothing in Paramiko that will help you implement this.
If you need to find out when a specific command finishes or where the output of a specific command ends, you need to use features of the shell.
You can solve that by inserting a unique separator (string) in between the commands and searching for it in the channel output stream. With common *nix shells, something like this works:
channel = ssh.invoke_shell()
channel.send('cd /mytargetRep\n')
channel.send('echo unique-string-separating-output-of-the-commands\n')
channel.send('./executeWhatIWant\n')
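For completeness, here is a rough sketch of reading that stream. The host, credentials, and the trick of splitting the marker inside the echo command are my additions, not part of the original snippet; the split keeps the shell's echo of the typed command line from matching the marker prematurely:

import paramiko

SEPARATOR = 'unique-string-separating-output-of-the-commands'

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('target.example.com', username='user', password='secret')

channel = ssh.invoke_shell()
channel.send('cd /mytargetRep\n')
channel.send('echo "%s""%s"\n' % (SEPARATOR[:6], SEPARATOR[6:]))
channel.send('./executeWhatIWant\n')

# Read until the marker appears; what follows was produced after the echo
# command (the echoed command line, then ./executeWhatIWant's output).
output = ''
while SEPARATOR not in output:
    output += channel.recv(1024).decode('utf-8', 'replace')
command_output = output.split(SEPARATOR, 1)[1]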
Though I do not think you will really need that very often. Most commands that are needed to make a specific command work, like cd or set, do not output anything.
So in most cases you can use SSHClient.exec_command, and your code will be way simpler and more reliable:
Execute multiple commands in Paramiko so that commands are affected by their predecessors
Even if you need to use something seemingly complex like su/sudo, it is still better to stick with SSHClient.exec_command:
Executing command using "su -l" in SSH using Python
For a similar question, see:
Combining interactive shell and recv_exit_status method using Paramiko

Execute scp using Popen without having to enter password

I have the following script
test.py
#!/usr/bin/env python2
from subprocess import Popen, PIPE, STDOUT
proc = Popen(['scp', 'test_file', 'user@192.168.120.172:/home/user/data'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
out, err = proc.communicate(input='userpass\n')
print('stdout: ' + out)
print('stderr: ' + str(err))
which is meant to copy test_file into a remote directory /home/user/data located at 192.168.120.172, logging in as a given user user. In order to do that I must use scp. No key authentication is allowed (don't ask why, it's just how things are and I cannot change it).
Even though I am piping userpass to the process, I still get a prompt in the terminal to enter the password. I want to just run test.py on the local machine and have the remote machine receive the file without any user interaction.
I thought that I wasn't using communicate() correctly, so I manually called
proc.stdin.write('userpass\n')
proc.stdin.flush()
out, err = proc.communicate()
but nothing changed and I still got the password prompt.
When scp or ssh attempt to read a password, they do not read it from stdin. Instead they open /dev/tty and read the password directly from the connected terminal.
sshpass works by creating its own dummy terminal and spawning ssh or scp in a child process controlled by that terminal. That's basically the only way to intercept the password prompt. The recommended solution is to use public key authentication, but you say you cannot do that.
If, as you say, you cannot install sshpass and also cannot use a secure form of authentication, then about the only thing you can do is re-implement sshpass in your own code. sshpass itself is licensed under the GPL, so if you copy the existing code, be sure not to infringe its copyleft.
Here's the comment from the sshpass source which describes how it manages to spoof the input:
/*
Comment no. 3.14159
This comment documents the history of code.
We need to open the slavept inside the child process, after "setsid", so that it becomes the controlling
TTY for the process. We do not, otherwise, need the file descriptor open. The original approach was to
close the fd immediately after, as it is no longer needed.
It turns out that (at least) the Linux kernel considers a master ptty fd that has no open slave fds
to be unused, and causes "select" to return with "error on fd". The subsequent read would fail, causing us
to go into an infinite loop. This is a bug in the kernel, as the fact that a master ptty fd has no slaves
is not a permenant problem. As long as processes exist that have the slave end as their controlling TTYs,
new slave fds can be created by opening /dev/tty, which is exactly what ssh is, in fact, doing.
Our attempt at solving this problem, then, was to have the child process not close its end of the slave
ptty fd. We do, essentially, leak this fd, but this was a small price to pay. This worked great up until
openssh version 5.6.
Openssh version 5.6 looks at all of its open file descriptors, and closes any that it does not know what
they are for. While entirely within its prerogative, this breaks our fix, causing sshpass to either
hang, or do the infinite loop again.
Our solution is to keep the slave end open in both parent AND child, at least until the handshake is
complete, at which point we no longer need to monitor the TTY anyways.
*/
So what sshpass does is open a pseudo terminal device (using posix_openpt), then fork, and in the child process make the slave the controlling pty for the process. Then it can exec the scp command.
I don't know if you can get this to work from Python, but the good news is the standard library does include functions for working with pseudo terminals: https://docs.python.org/3.6/library/pty.html
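For instance, a rough sketch of that technique using pty.fork() (untested simplifying assumptions: the password prompt arrives in a single read, and there is no host-key confirmation question to answer first):

import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # Child: the pty slave is now the controlling terminal, so scp's
    # open of /dev/tty gets our pty instead of the real terminal.
    os.execlp('scp', 'scp', 'test_file', 'user@192.168.120.172:/home/user/data')
    os._exit(127)

os.read(fd, 1024)               # consume the "password:" prompt
os.write(fd, b'userpass\n')     # answer it
try:
    while os.read(fd, 1024):    # drain scp's remaining output
        pass
except OSError:
    pass                        # reading a closed pty raises EIO on Linux
os.waitpid(pid, 0)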

Python socket program with shell script?

I have two machines connected by a switch. I have a popular server application, which we can call "SXC_SERVER", on machine A, and I interrogate the SXC_SERVER with the corresponding client application from machine B, which I'll call "SXC_CLIENT". What I am trying to do is two-fold:
firstly, capture the traffic flow of the SXC_SERVER and SXC_CLIENT interaction through tcpdump. The interaction between the two is a simple GET and RESPONSE, but I require the traffic traces.
secondly, log the Resident Set Size (RSS) usage of the SXC_SERVER process during each interaction/iteration.
Moreover, I don't just need one traffic trace of the communication and one memory-usage log of the SXC_SERVER process, otherwise I wouldn't be writing this because I could go away and do that in ten minutes... In fact I am aiming to do very many! But let's say here, for simplicity, that I want to do 10.
Since this will be very labor intensive, as it will require me to be at both machines stopping and starting the SXC_CLIENT-to-SXC_SERVER interrogation, the tcpdump traffic capture, and the RSS memory logging of the SXC_SERVER process, I want to write an automation script.
But! I am not a programmer, or software guy... (darn)
However, that said, I can imagine a separate client/server program that oversees this automation, which we can call AUTO_SERVER and AUTO_CLIENT. My thought is that machine B would run AUTO_CLIENT and machine A would run AUTO_SERVER. The aim of both is to facilitate the automation, i.e. the stopping and starting of tcpdump and of the memory logging of the SXC_SERVER process on machine A before machine B queries SXC_SERVER with SXC_CLIENT (if you follow me!).
Effectively after one run of the SXC_SERVER-to-SXC_CLIENT GET/RESPONSE interaction I'll end up with:
one traffic capture *.pcap file called n1.pcap
and one memory log dump (of the RSS associated to the process) called n1.csv.
I am not a programmer or software guy, but I can see a rough method (to the best of my ability) to achieve this, as follows:
Machine A: AUTO_SERVER
BEGIN:
msgReceived = open socket(listen on port *n*)
DO
1. wait for machine B to tell me when to start watch (as in the program) to log the RSS memory usage of the SXC_SERVER process, using the hardcoded command:
watch -n 0.1 'ps -p $(pgrep -d"," -x snmpd) -o rss= | awk '\''{ i += $1 } END { print i }'\'' >> ~/Desktop/mem_logs/mem_i.csv
UNTIL (msgReceived == "FINISH")
quit
END.
Machine B: AUTO_CLIENT
BEGIN:
open socket(new)
for i in 10, do
1. locally start tcpdump with a hardcoded tcpdump command, using the relevant filter to capture only the SXC_SERVER-to-SXC_CLIENT traffic, and set the output flag to write all captured traffic to a PCAP file called n*i*.pcap, where *i* is the integer of the current for loop, saving the file in the folder "~/Desktop/test_captures/".
2. send the GET request to SXC_SERVER
3. wait for the RESPONSE reply from SXC_SERVER
4. after receiving the reply, tell machine A to stop the watch command
i++
5. send the string "FINISH" to machine A.
END.
As you can see, I assume this would be achieved by a separate, small client/server-like program (which here I've called AUTO_SERVER and AUTO_CLIENT) on both machines. The really rough pseudo-code design should be self-explanatory.
I have found a small client/server socket program located here: http://www.velvetcache.org/2010/06/14/python-unix-sockets which I think may be suitable if I edit it, but I am not sure exactly how I can feasibly achieve this. That is where you may be able to provide some assistance.
Can Python do this automation?
Can it be done with a single bash script?
Do you think I am on the right path with this?
Or have you any helpful suggestions?
Regards.
You can use Python for this kind of thing, but I would strongly recommend using SSH for the bulk of the work (rather than coding the connection handling yourself), and then using either a bash script or a Python script to launch the tcpdump etc. processes.
Your question, however, is a bit too open-ended for Stack Overflow: it sounds like you are asking someone to write this program for you, rather than for help with a specific problem.
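That said, if you do try it, here is a very rough sketch of the shape it could take, driving everything from machine B. The hostnames, interface, filter, remote logging command, and the SXC_CLIENT invocation are all placeholders, not working values:

import subprocess

for i in range(1, 11):
    # Start tcpdump locally (usually needs root), one pcap per iteration
    tcpdump = subprocess.Popen(
        ['tcpdump', '-i', 'eth0', 'host', 'machineA',
         '-w', 'test_captures/n%d.pcap' % i])

    # Start the RSS logger on machine A over SSH; -t ties the remote loop
    # to this ssh process so terminating ssh also stops the loop
    logger = subprocess.Popen(
        ['ssh', '-t', 'machineA',
         'while true; do ps -p $(pgrep -x snmpd) -o rss= '
         '>> mem_logs/mem_%d.csv; sleep 0.1; done' % i])

    # One SXC_CLIENT GET/RESPONSE interrogation (placeholder command)
    subprocess.call(['sxc_client', '--get', 'machineA'])

    logger.terminate()
    tcpdump.terminate()
    tcpdump.wait()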

How can I detect whether another copy of a Python script is already running

I have a script. It uses GTK. I need to know when another copy of the script starts; if it does, the existing window should extend.
Please, tell me the way I can detect it.
You could use a D-Bus service. Your script would start a new service if none is found running in the current session, and otherwise send a D-Bus message to the running instance (which can carry "anything", including strings, lists, and dicts).
The GTK-based library libunique (missing Python bindings?) uses this approach in its implementation of "unique" applications.
You can use a PID file to determine whether the application is already running (just search for "python daemon" on Google to find some working implementations).
If you detect that the program is already running, you can communicate with the running instance using named pipes.
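A rough sketch of that combination (the file locations are arbitrary; note that opening the FIFO for writing blocks until the running instance has it open for reading):

import os

PIDFILE = '/tmp/myapp.pid'
FIFO = '/tmp/myapp.fifo'

def other_copy_running():
    try:
        pid = int(open(PIDFILE).read())
        os.kill(pid, 0)            # signal 0 just tests that the PID exists
        return True
    except (IOError, ValueError, OSError):
        return False

if other_copy_running():
    with open(FIFO, 'w') as pipe:  # tell the first instance to extend
        pipe.write('extend\n')
else:
    open(PIDFILE, 'w').write(str(os.getpid()))
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)
    # ... enter the GTK main loop and watch the FIFO for messages ...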
The new copy could search for running copies, fire a SIGUSR1 signal, and trigger a callback in your running process that then handles all the magic.
See the signal library for details and the list of things that can go wrong.
I've done this in several ways, depending on the scenario.
In one case my script had to listen on a TCP port, so I'd just check whether the port was available; if it was, this was a new copy. This was sufficient for me, but in certain cases, if the port is already in use, it might be because some other kind of application is listening on that port. You can use OS calls to find out who is listening on the port, or try sending data and checking the response.
In another case I used a PID file. Just decide on a location and a filename, and every time your script starts, read that file to get a PID. If that PID is running, another copy is already there; otherwise create the file and write your process ID into it. This is pretty simple. If you are using Django, you can simply use Django's daemonizer: "from django.utils import daemonize". Otherwise you can use this script: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
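The port-based check from the first case boils down to something like this (the port number is arbitrary; keep the socket open for the lifetime of the first copy):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(('127.0.0.1', 47200))   # placeholder port
    sock.listen(1)                    # success: we are the first copy
except socket.error:
    print('Another copy already holds the port.')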

Have a Python CGI call a Perl CGI, passing original info (to limit searching private Mailman archives to logged-in users)

I need to have a Python CGI script do some stuff (a little bit of security checking), and then end up calling a Perl CGI script, passing anything it received (e.g., POST info) onto the Perl script.
For background, my reason for doing this is that I'm trying to integrate Swish searching with Mailman list archives.
Swish searching uses swish.cgi, a Perl script, but because these are private list archives I can't just allow people to call swish.cgi directly as recommended on this page: http://wpkg.org/Integrating_Mailman_with_a_Swish-e_search_engine#Mailman_configuration
I believe what I need to do is have the Mailman "private" cgi-bin file (written in Python) do its regular security checking (which calls a few Mailman/python modules) and THEN call on swish.cgi to do the search (after having verified that the user is on the mailing list).
Essentially, I believe the simplest solution would just be to protect access to the swish.cgi Perl script with a variant of the standard mailman cgi-bin/private Python script.
(I considered the idea that people could search with a non-protected swish.cgi, and people wouldn't be able to view the full results because those posts are already password-protected by default Mailman setup... but the problem is that even showing the Swish post excerpts in the search results could expose confidential information, so I must restrict access to even the search itself to just subscribers.)
If someone has a better idea of how to solve the overall problem without doing the Python-CGI-calls-Perl-CGI I'll be happy to consider that the "answer".
Just know that my goal is to make little (ideally no) changes to the standard Mailman installation. Copying the "private" cgi-bin script (whose source is mailman-2.1.12/Mailman/Cgi/private.py) and making changes to call swish.cgi is cool, but modifying the existing private cgi-bin script wouldn't really be cool.
Here's what I did to test the answer (using os.execv to replace the Python script with the Perl script, so that the Perl script inherits the Python script's environment):
I created a pythontest script with:
#!/usr/bin/env python2
import os

os.environ['FOO'] = 'BAR'
mydir = os.path.dirname(os.environ.get('SCRIPT_FILENAME'))
childprog = mydir + '/perltest'
childargs = [childprog]  # conventional argv[0]; recent Pythons reject an empty argv
os.execv(childprog, childargs)
Then a perltest script with:
print "Content-type: text/html\n\n";
while (($key,$value) = each %ENV) {
print "<p>$key=$value</p>\n";
}
Then I called http://myserver.com/cgi-bin/pythontest and saw that the environment printout included the custom FOO variable, so the child perltest process had successfully inherited all the environment variables.
I'm just going to state the obvious here because I don't have any detailed knowledge about your specific environment.
If your Python script is a genuine CGI and not a mod_python script or similar, then it is just a regular process spawned to handle one request. You can use os.execv to replace it with another process (e.g. the Perl CGI), and the new process will inherit the current process's environment, stdin, stdout, and stderr. This assumes that you don't need to read stdin for your security checks. It may also depend on whether your CGI is running in a restricted environment: execv is potentially dangerous and might be blocked in such an environment.
If you're running in a mod_python environment, or if you need to peek at posted data (i.e. stdin), then the execv approach isn't available to you. You have two main alternatives.
You could run the Perl CGI directly (e.g. look at the subprocess module), handing it a correct environment and feeding the correct data to its stdin. You can then spool the data returned on its stdout, raw (or cooked if needed), directly back to the web server.
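A rough sketch of that first alternative (the path to swish.cgi is a placeholder; note that swish.cgi prints its own CGI headers, so they pass straight through):

import os
import sys
from subprocess import Popen, PIPE

post_body = sys.stdin.read()       # the POST data the web server gave us

proc = Popen(['/usr/lib/cgi-bin/swish.cgi'], stdin=PIPE, stdout=PIPE,
             env=os.environ.copy())
out, _ = proc.communicate(post_body)
sys.stdout.write(out)              # spool the child's output back as-is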
Otherwise, you could make a local web request to run the CGI. This is likely to require a bit less knowledge about the server setup, but a bit more work in the python CGI to make and handle the HTTP request.
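And a sketch of that second, local-request alternative, in Python 2 like the rest of this thread (the URL and parameters are placeholders):

import urllib
import urllib2

params = urllib.urlencode({'query': 'search terms'})
response = urllib2.urlopen('http://localhost/cgi-bin/swish.cgi', params)
print(response.read())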
