import logging
import graypy
my_logger = logging.getLogger('test_logger')
my_logger.setLevel(logging.DEBUG)
handler = graypy.GELFHandler('my_graylog_server', 12201)
my_logger.addHandler(handler)
my_adapter = logging.LoggerAdapter(my_logger, {'username': 'John'})
my_adapter.debug('Hello Graylog2 from John.')
This is not working.
I think the issue is the URL - it should send to /gelf - because when I curl my Graylog server from the terminal, it works:
curl -XPOST http://my_graylog_server:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example1111.org", "facility":"test", "_foo":"bar"}'
Open your Graylog web app and click 'System'. You will see a list of links on the right. One of them is 'Input'; click it.
Now you have an overview of all running inputs listening on different ports. At the top of the page you can create a new one. There should be a drop-down listing the available modes for the input listener (I think 'GELF AMQP' is the default). Change it to 'GELF UDP' and click 'Launch new input'; in the next dialog you can specify a port for the service.
You also have to set the node where the messages will be stored. This node has (or should have) the same IP as your whole Graylog2 system.
Now you should be able to receive messages.
I think you've set up your input for GELF TCP instead of UDP. I ran into this same issue configuring my Graylog EC2 appliance just a few moments ago: I set up a TCP input and it received the curl message, but my Python application wouldn't send to it. I then created a GELF UDP input and voila!
Also make sure firewalls/security groups aren't blocking UDP on port 12201.
Good luck, and I hope this is your issue!
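If you want to take graypy out of the picture while testing, note that a GELF UDP input accepts one zlib-compressed JSON document per datagram, which is what graypy sends. A minimal sketch, reusing the server name and port from the question:
import json
import socket
import zlib

# A GELF UDP input accepts a zlib-compressed JSON document per datagram.
message = {
    "version": "1.1",
    "host": "example.org",
    "short_message": "Hello Graylog, bypassing graypy",
    "_username": "John",  # leading underscore marks an additional field
}
payload = zlib.compress(json.dumps(message).encode("utf-8"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("my_graylog_server", 12201))
sock.close()
If this message shows up in the input but the graypy one does not, the problem is on the Python logging side; if neither arrives, suspect the input type or the firewall.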
I have an application that streams video on the web using Flask, something like this example. But sometimes, when the user closes the connection, Flask does not recognize the disconnection and keeps the socket open. Each socket in Linux is a file descriptor, and the maximum number of open file descriptors in Linux is 1024 by default. After a while (e.g. 24 hours), new users cannot see the video stream because Flask cannot create a new socket (which is a file descriptor).
The same happens when I use flask-sockets (from flask_sockets import Sockets). I don't know what happens to these sockets. Most of the time, when the user refreshes the browser or closes it normally, the socket gets closed on the server.
I ran a test: I unplugged the network cable from my laptop (as a client) and found that in this case the sockets stay open and Flask does not recognize this kind of disconnection.
Any help will be appreciated.
Update 1:
I put this code at the top of the main script, and it catches some of the disconnections, but the main problem is still there.
import socket
socket.setdefaulttimeout(1)
Update 2:
This seems to be a duplicate of this post, but there is no solution there. I checked the socket states (with sudo lsof -p your_process_id | egrep 'CLOSE_WAIT') and all of them are in "CLOSE_WAIT".
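For watching this from inside the process, here is a minimal sketch using the third-party psutil package (the 60-second interval is an arbitrary choice); it reports the same CLOSE_WAIT information as the lsof command above:
import os
import time
import psutil

proc = psutil.Process(os.getpid())
while True:
    # Sockets in CLOSE_WAIT: the client has closed its end, but this
    # process has not called close() on its side yet.
    stuck = [c for c in proc.connections(kind="tcp")
             if c.status == psutil.CONN_CLOSE_WAIT]
    print("CLOSE_WAIT sockets: %d" % len(stuck))
    time.sleep(60)
A socket sitting in CLOSE_WAIT means the client already closed its end and the server is the side failing to close, which points at the streaming generator never being finalized.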
I've been trying to use various Python libraries for working with Connman over D-Bus, particularly this sample code:
https://github.com/liamw9534/pyconnman/blob/master/demo/demo.py
The problem I have is that when connecting to a WPA2 access point for the very first time, I will always receive a timeout message. For example:
CONN> list-services
CONN> agent-start /test/agent ssid=myNetwork passphrase=myPassphrase
CONN> service-connect /net/connman/service/wifi_xxxxx__managed_psk
Eventually this is the message I receive back from the interface:
Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken
I can confirm that at this point Connman has not connected to a WiFi network or obtained an IP address. The only way I can get this to work is by using the connmanctl application itself from a Linux terminal:
connmanctl
connmanctl> agent on
connmanctl> connect wifi_xxxxx__managed_psk
Agent RequestInput wifi_xxxxx__managed_psk
Passphrase = [ Type=psk, Requirement=mandatory ]
Passphrase? myPassword
connmanctl> Connected wifi_xxxxx__managed_psk
This creates a settings folder under /var/lib/connman for the wifi network. I can now use the demo.py script mentioned above to either disconnect or reconnect.
Connman is still a bit of a mystery to me in many ways, and I'm not sure why I have to use the interactive shell to connect to a network for the first time. Any ideas?
In case you're still looking for the answer:
Connman needs an agent to answer the security questions (in the WPA2 case, it's the password). You can either run an agent and reply to Connman's questions, or you can create a file in /var/lib/connman with the right keys (see here). Once a file is created or deleted, Connman will automagically act accordingly (try to connect or disconnect).
A basic file would look like:
[service_mywificonfig]
Type = wifi
Security = psk
Name = myssid
Passphrase = yourpass
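(Note that Connman's provisioning format expects Security = psk for WPA/WPA2 networks, which is why the file above uses psk rather than wpa2.) If you'd rather create that file from Python than from a shell, a minimal sketch - the file name mywifi.config is my own choice, and writing to /var/lib/connman requires root:
import os

# Provisioning files in /var/lib/connman use the .config extension.
PATH = "/var/lib/connman/mywifi.config"

CONFIG = """[service_mywificonfig]
Type = wifi
Security = psk
Name = myssid
Passphrase = yourpass
"""

with open(PATH, "w") as f:
    f.write(CONFIG)
os.chmod(PATH, 0o600)  # the passphrase is stored in plain text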
Assume I've managed to get into the middle of the communication between a client and a server (say, I open up a hotspot and cause the client to connect to the server only through my machine).
How can I alter the packets that my client sends and receives without interrupting my own communication with other services? There must be a way to route all of the packets the client sends and is about to receive (before forwarding them to him) through my script.
I think iptables is the right direction for accomplishing this, but I'm not sure exactly which arguments would make it work. I already have the following simple script:
hotspotd start #a script that runs dnsmasq as both a DNS and DHCP server, configures and starts a hotspot
iptables -P FORWARD ACCEPT
iptables --append FORWARD --in-interface wlan0 -j ACCEPT
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
#wlan0 is the interface on which the hotspot is.
#eth0 is the interface that is connected to the internet
Now, this works perfectly for a passive MITM - I can see everything that the client sends and receives. But now I want to step it up and redirect every message he sends and receives through me.
My eventual goal is to get to a level where I could execute the following script:
from scapy.all import *
from scapy_http.http import *

def callback(pkt):
    # Process the packet here: inspect source and destination addresses, ports, data
    send(pkt)

sniff(filter='port 666', prn=callback)  # Assuming all relevant packets are redirected to port 666
How do I accomplish redirecting every packet the client sends and is-about-to-receive?
You can use NFQUEUE, which has Python bindings.
NFQUEUE is a userspace queue that is a valid iptables target. You can redirect some traffic to the NFQUEUE:
iptables -I INPUT -d 192.168.0.0/24 -j NFQUEUE --queue-num 1
Then access the packets from your code:
from netfilterqueue import NetfilterQueue

def print_and_accept(pkt):
    print(pkt)
    pkt.accept()

nfqueue = NetfilterQueue()
nfqueue.bind(1, print_and_accept)
try:
    nfqueue.run()
except KeyboardInterrupt:
    print('')

nfqueue.unbind()
Note the pkt.accept() call. This returns a verdict to the nfqueue, telling it that it should accept the packet - i.e. allow it to continue along its normal route in the kernel. To modify a packet, instead of accepting it, you'd need to copy it, return a drop verdict, and finally resend it with the included modifications.
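As a hedged sketch of that drop-and-resend pattern, combining netfilterqueue with scapy (the TTL change is only a placeholder for whatever modification you actually want to make):
from netfilterqueue import NetfilterQueue
from scapy.all import IP, send

def modify_and_resend(pkt):
    ip = IP(pkt.get_payload())  # parse the queued packet with scapy
    ip.ttl = 64                 # placeholder for your actual modification
    del ip.chksum               # let scapy recompute the IP checksum
    pkt.drop()                  # verdict: drop the original in the kernel
    send(ip)                    # re-inject the modified copy

nfqueue = NetfilterQueue()
nfqueue.bind(1, modify_and_resend)
try:
    nfqueue.run()
finally:
    nfqueue.unbind()
For the hotspot setup in the question, note that the client's forwarded traffic traverses the FORWARD chain rather than INPUT, so the iptables rule would target FORWARD there (iptables -I FORWARD -j NFQUEUE --queue-num 1).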
I'm trying to run Tor through Python. My goal is to be able to switch exits, or otherwise alter my IP from time to time, at a moment of my choosing. I've followed several tutorials and encountered several different errors.
This code prints my IP address:
import requests
r = requests.get('http://icanhazip.com/')
r.content
# returns my regular IP
I've downloaded and 'installed' the Tor browser (although it seems to run from an .exe). When it is running, I can use the following code to return my Tor IP, which seems to change every 10 minutes or so, so everything seems to be working so far:
import requests
import socks
import socket

socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9150)
socket.socket = socks.socksocket  # route all new sockets through Tor's SOCKS proxy
r = requests.get('http://icanhazip.com/')
print r.content
# Returns a different IP
Now, I've found instructions here on how to request a new identity programmatically, but this is where things begin to go wrong. I run the following code:
from stem import Signal
from stem.control import Controller
with Controller.from_port(port=9151) as controller:
    controller.authenticate()
    controller.signal(Signal.NEWNYM)
and I get the error "SOCKS5Error: 0x01: General SOCKS server failure" at the line that creates the controller. I thought this might be a configuration problem - the instructions say there should be a config file called 'torrc' that contains the port numbers, among other things.
Eventually, in the directory C:\Users\..myname..\Desktop\Tor Browser\Browser\TorBrowser\Data\Tor, I found three files: 'torrc', 'torrc-defaults', and 'torrc.orig.1'. My 'torrc-defaults' looks similar to the 'torrc' shown in the documentation, my 'torrc.orig.1' is empty, and my 'torrc' has two lines of comments that look as follows, with some more settings afterwards but no port settings:
# This file was generated by Tor; if you edit it, comments will not be preserved
# The old torrc file was renamed to torrc.orig.1 or similar, and Tor will ignore it
...
I've tried to make sure that the two ports listed in 'torrc-defaults' match the ports in the socks statement at the beginning and the controller statement further on. When these are 9150 and 9151 respectively, I get the general SOCKS server failure listed above.
When I try to run the Controller with the wrong port (in this case 9051, which was recommended in other posts but for me leads to the Tor browser failing to load when I adjust 'torrc-defaults'), I instead get the error message "[Errno 10061] No connection could be made because the target machine actively refused it".
Finally, when I run the Controller with the Tor browser running in the background, but without first running the SOCKS statement, I instead get a lot of warning text and finally an error, as shown in part below:
...
...
250 __OwningControllerProcess
DEBUG:stem:GETCONF __owningcontrollerprocess (runtime: 0.0010)
TRACE:stem:Sent to tor:
SIGNAL NEWNYM
TRACE:stem:Received from tor:
250 OK
TRACE:stem:Received from tor:
650 SIGNAL NEWNYM
TRACE:stem:Sent to tor:
QUIT
TRACE:stem:Received from tor:
250 closing connection
INFO:stem:Error while receiving a control message (SocketClosed): empty socket content
At first I thought this looked quite promising - I put in a quick check line like this both before and after the new-identity request:
print requests.get('http://icanhazip.com/').content
It prints the IP, but unfortunately it's the same both before and after.
I'm pretty lost here - any support would be appreciated!
Thanks.
Get your hashed password:
tor --hash-password my_password
Modify the configuration file:
sudo gedit /etc/tor/torrc
Add these lines:
ControlPort 9051
HashedControlPassword 16:E600ADC1B52C80BB6022A0E999A7734571A451EB6AE50FED489B72E3DF
CookieAuthentication 1
Note: the HashedControlPassword is the hash generated by the first command.
Refer to this link.
I haven't tried it myself because I am using a Mac, but I hope this helps with the configuration.
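With that torrc in place, the stem snippet from the question authenticates with the plain-text password (not the hash); a minimal sketch:
from stem import Signal
from stem.control import Controller

# 9051 must match the ControlPort line added to torrc above
with Controller.from_port(port=9051) as controller:
    controller.authenticate(password='my_password')  # the plain password, not the hash
    controller.signal(Signal.NEWNYM)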
You can use torify to achieve your goal. For example, run the Python interpreter with the command "sudo torify python" and every connection will be routed through Tor.
Reference on torify: https://trac.torproject.org/projects/tor/wiki/doc/TorifyHOWTO
I'm looking to start a web project using Flask and its SocketIO plugin, which depends on gevent (something something greenlets), but I don't understand how gevent relates to the webserver. Does using gevent restrict my server choice at all? How does it relate to the different levels of web servers that we have in python (e.g. Nginx/Apache, Gunicorn)?
Thanks for the insight.
First, let's clarify what we are talking about:
gevent is a library that lets you program with event loops easily. It is a way to return responses immediately, without "blocking" the requester.
socket.io is a JavaScript library for creating clients that can maintain permanent connections to servers, which send events. The library can then react to these events.
greenlet - think of this as a thread: a way to launch multiple workers that do some tasks.
A highly simplified overview of the entire process follows:
Imagine you are creating a chat client.
You need a way to notify the users' screens when anyone types a message. For this to happen, you need some way to tell all the users that a new message is there to be displayed. That's what socket.io does. You can think of it like a radio that is tuned to a particular frequency. Whenever someone transmits on this frequency, the code does something. In the case of the chat program, it adds the message to the chat box window.
Of course, if you have a radio tuned to a frequency (your client), then you need a radio station/dj to transmit on this frequency. Here is where your flask code comes in. It will create "rooms" and then transmit messages. The clients listen for these messages.
You can also write the server-side ("radio station") code in socket.io using node, but that is out of scope here.
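To make the radio-station analogy concrete, here is a minimal Flask-SocketIO sketch of the server side (the event name 'chat message' and the room field are made up for illustration):
from flask import Flask
from flask_socketio import SocketIO, emit, join_room

app = Flask(__name__)
socketio = SocketIO(app)  # picks up gevent/eventlet if installed

@socketio.on('join')
def on_join(data):
    # Tune this client to a "frequency" (a room)
    join_room(data['room'])

@socketio.on('chat message')
def on_message(data):
    # Transmit to everyone tuned to that room
    emit('chat message', data, room=data['room'])

if __name__ == '__main__':
    socketio.run(app)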
The problem here is that, traditionally, a web server works like this:
1. A user types an address into a browser and hits enter (or go).
2. The browser reads the web address and then, using the DNS system, finds the IP address of the server.
3. It creates a connection to the server and then sends a request.
4. The webserver accepts the request.
5. It does some work, or launches some process (depending on the type of request).
6. It prepares (or receives) a response from the process.
7. It sends the response to the client.
8. It closes the connection.
Between steps 3 and 8, the client (the browser) is waiting for a response - it is blocked from doing anything else. So if there is a problem somewhere - say, some server-side script is taking too long to process the request - the browser stays stuck on the white page with the loading icon spinning. It can't do anything until the entire process completes. This is just how the web was designed to work.
This kind of 'blocking' architecture works well for 1-to-1 communication. However, for multiple people to keep updated, this blocking doesn't work.
The event libraries (gevent) help with this: they accept the request and do not block the client; they return immediately, and the response is sent when the processing completes.
Your application, however, still needs to notify the client - but as the connection is closed, you don't have a way to contact the client back.
In order to notify the client, and to make sure the client doesn't need to "refresh", a permanent connection should be kept open - that's what socket.io does. It opens a permanent connection and is always listening for messages. So:
1. A work request comes in from one end and is accepted.
2. The work is executed and a response is generated by something else (it could be the same program or another program).
3. A notification is sent: "hey, I'm done with your request - here is the response".
4. The requester from step 1 listens for this message and then does something.
Underneath it all is WebSocket, a new full-duplex protocol that enables all this radio/DJ functionality.
Things common between WebSockets and HTTP:
They work on the same port (80).
WebSocket requests start off as HTTP requests for the handshake (with an Upgrade header), but then shift over to the WebSocket protocol - at which point the connection is handed off to a WebSocket-compatible server.
All your traditional web server has to do is listen for this handshake request, acknowledge it, and then pass the request on to a WebSocket-compatible server - just like any other normal proxy request.
For Apache, you can use mod_proxy_wstunnel.
nginx versions 1.3+ have WebSocket support built in.
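As a sketch, the relevant nginx configuration looks something like this (assuming your Flask-SocketIO app listens on 127.0.0.1:5000; the path and port are assumptions):
location /socket.io {
    proxy_pass http://127.0.0.1:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
The Upgrade/Connection headers forward the WebSocket handshake, after which nginx tunnels the connection to the backend like any other proxied request.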