I'm testing my own DDoS protection feature implemented in my server (this is necessary). Currently I have a terrible loop for making multiple Tor requests, each with its own identity.
import os
import requests

# Restart the Tor process to get a fresh identity
os.system("taskkill /f /im tor.exe")
os.startfile("C:/Tor/Browser/TorBrowser/Tor/tor.exe")

# Route requests through the local Tor SOCKS proxy
session = requests.session()
session.proxies = {}
session.proxies['http'] = 'socks5h://localhost:9050'
session.proxies['https'] = 'socks5h://localhost:9050'
Now I want to multithread this for faster speeds, since each Tor connection takes ages to load.
If I google how to run multiple Tor instances, I only find information on how to do it from within the Tor Browser itself, never how to do it programmatically. Is there a way to do this on Windows with Python 3 specifically?
Any help is appreciated.
The key point to understand about running multiple separate Tor processes is that each one needs to listen on its own ControlPort and SocksPort so that your clients can issue requests through each individual instance.
If you use Stem, stem.process.launch_tor_with_config would be the recommended way to launch multiple Tor processes. By using this method, you can pass the necessary config options dynamically to each client without having to create individual files, and you'll have better process management over the Tor instances.
If you want to use os, you will need to create one config file per instance and pass that to tor when you start it.
At minimum, create one torrc config file for each instance you want to run with the following:
torrc.1
ControlPort 9800
SocksPort 9801
torrc.2
ControlPort 9802
SocksPort 9803
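Each instance should also get its own DataDirectory, since two Tor processes can't share one (the second process will fail to acquire the lock on it). Adding a line like the following to each torrc avoids that (the paths here are just examples):
DataDirectory C:/tor_data/1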
Each individual client will then connect to its instance's SocksPort to issue requests.
To start them (note that os.system blocks until the launched process exits, so it's better to launch each instance with subprocess.Popen):
import subprocess
subprocess.Popen(["C:/Tor/Browser/TorBrowser/Tor/tor.exe", "-f", "C:/path/to/torrc.1"])
subprocess.Popen(["C:/Tor/Browser/TorBrowser/Tor/tor.exe", "-f", "C:/path/to/torrc.2"])
Then create one or more clients per instance:
session1 = requests.session()
session1.proxies = {}
session1.proxies['http'] = 'socks5h://localhost:9801'
session1.proxies['https'] = 'socks5h://localhost:9801'
session2 = requests.session()
session2.proxies = {}
session2.proxies['http'] = 'socks5h://localhost:9803'
session2.proxies['https'] = 'socks5h://localhost:9803'
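To cover the multithreading part of the question, a rough sketch along these lines should work on top of those sessions (the target URL and worker count here are just placeholders):

import concurrent.futures

def fetch(session, url):
    # Each call goes out through whichever Tor instance backs this session
    return session.get(url, timeout=60).status_code

sessions = [session1, session2]
urls = ["http://example.com/"] * 10  # replace with your own endpoints

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(fetch, sessions[i % len(sessions)], u) for i, u in enumerate(urls)]
    for f in concurrent.futures.as_completed(futures):
        print(f.result())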
First of all, install Stem from a terminal:
pip install stem
Then save the following code in a file named myfile.py.
Import requests and stem.process at the start of the file, then add the code below:
import requests
import stem.process

x = 6
for i in range(1, x):
    # Give each instance its own control/SOCKS port and data directory
    cp = str(10000 + i)
    sp = str(11000 + i)
    tp1 = stem.process.launch_tor_with_config(
        tor_cmd='C:\\Users\\<Tor Directory>\\Browser\\TorBrowser\\Tor\\tor.exe',
        config={
            'ControlPort': cp,
            'SocksPort': sp,
            'DataDirectory': 'C:/<Any Path for data directories>/proxies/' + str(i) + '/',
            'Log': [
                'NOTICE stdout',
                'ERR file C:/<Any Path for Error file>/tor_error_log.txt',
            ],
        },
    )
    # Issue a test request through the instance that was just launched
    proxies = {
        'http': 'socks5h://127.0.0.1:' + sp,
        'https': 'socks5h://127.0.0.1:' + sp
    }
    r1 = requests.get('http://ipinfo.io/json', proxies=proxies)
    print('\n')
    print(r1.content)
    print('\n')
Now go into the folder that contains myfile.py, open a command prompt (cmd) or any other terminal there, and launch the file:
python myfile.py
This will launch 5 Tor processes on SOCKS ports 11001, 11002, 11003, 11004 and 11005.
You can access any of these Tor (SOCKS5) proxies from any program via IP address 127.0.0.1 and one of the ports above.
If you open Task Manager you will see 5 Tor processes running, each consuming about 10-20 MB of RAM.
If you get an error like this while running myfile.py in a terminal:
can not bind listening port. working with config files left us in broken state. Dying
then just close all Tor processes and launch myfile.py again. This error happens because a Tor process is already running on one of the ports.
To create more Tor processes, close all Tor instances from Task Manager and change the value of the variable x at the start of the file, e.g.:
x = 10  # or any integer like 20, 30, 50
Save myfile.py and run it again.
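Since stem.process.launch_tor_with_config returns the subprocess handle of the Tor process it started, you could also collect the handles in a list inside the loop and stop the instances from Python instead of Task Manager; a small sketch:

tor_processes = []
# inside the loop, after launching:  tor_processes.append(tp1)

# later, when you are done:
for p in tor_processes:
    p.kill()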
cheers!
I don't write scripts very often, but I am trying to write a Nagios plugin to check the status of a RAID controller on a remote host. The issue is that the command to get the output requires elevated privileges. What would be the correct, and most effective way to pull this off? The goal is to run:
'/opt/MegaRAID/MegaCli/MegaCli64 -ShowSummary -a0'
on a remote host from the monitoring server,
and then follow the basic idea of this logic:
#Nagios Plugin for Testing LSI Raid Status
import os, sys
import argparse
import socket
import subprocess
#nagios exit codes do not change#
OK = 0
WARNING = 1
CRITICAL = 2
UNKNOWN = 3
DEPENDENT = 4  # note: standard Nagios plugin exit codes are only 0-3
#nagios exit codes do not change#
#patterns to be searched
active = "Active"
online = "Online"
k = "OK"
degrade = "Degraded"
fail = "Failed"
parser = argparse.ArgumentParser(description='Py3 script for monitoring RAID status.')
#arguments
parser.add_argument("--user",
                    metavar='-U',
                    help="username for remote connection")
parser.add_argument("--hostname",
                    metavar='-H',
                    help="hostname of the remote host")
args = parser.parse_args()
print(args)
#turning args into variables
hostname = args.hostname
user = args.user
ssh = subprocess.Popen(["ssh", f"{user}@{hostname}", "/opt/MegaRAID/MegaCli/MegaCli64", "-ShowSummary", "-a0"],
                       stdout=subprocess.PIPE, stderr=subprocess.PIPE)
check = ssh.stdout.read().decode()
OK_STR = "RAID is OK!"
WARN_STR = "Warning! Something is wrong with the RAID!"
CRIT_STR = "CRITICAL! THE RAID IS BROKEN"
UNK_STR = "Uh oh! Something ain't right?"
if degrade in check:
    print(WARN_STR)
    sys.exit(WARNING)
elif fail in check:
    print(CRIT_STR)
    sys.exit(CRITICAL)
elif active in check or online in check or k in check:
    print(OK_STR)
    sys.exit(OK)
else:
    print(UNK_STR)
    sys.exit(UNKNOWN)
Any thoughts? This is far from my forte (and also an unfinished script) so I apologize for the layman format and any confusion in my phrasing.
I am trying to write a Nagios plugin to check the status of a RAID controller on a remote host. The issue is that the command to get the output requires elevated privileges. What would be the correct, and most effective way to pull this off?
I would recommend running the script via NRPE on the system in question, and then giving the user that the NRPE daemon runs as (probably nagios or similar) sudo permission to run that command with some very exact parameters.
The nrpe.cfg file mentions this example:
# Usage scenario:
# Execute restricted commmands using sudo. For this to work, you need to add
# the nagios user to your /etc/sudoers. An example entry for alllowing
# execution of the plugins from might be:
#
# nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/
...but there's no reason to be so forgiving; you can make it a lot safer by only allowing an exact command:
nagios ALL = NOPASSWD: /usr/sbin/megacli
Note that the line above still allows any parameters to that command. The following is even safer, as it will not allow any other variants (example):
nagios ALL = NOPASSWD: /usr/sbin/megacli -a foo -b bar -c5 -w1
Then configure the NRPE command to run the above with sudo in front of it, and it should work. You can verify by switching to the nagios user with su and trying the sudo command yourself.
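For reference, a hypothetical command definition in nrpe.cfg could look like the following (the command name is made up; the path and arguments should match exactly what you whitelisted in sudoers):
command[check_megaraid]=sudo /usr/sbin/megacli -a foo -b bar -c5 -w1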
Also, note that there are very likely modules you can import for writing Python Nagios plugins that make this easier, with built-in support for things like thresholds and their syntax.
I am new to using APIs. I need to run the same exe file twice at the same time and use API calls to communicate with each one. When I want to run the exe file just once, I follow these steps to communicate with it
import subprocess
import requests
command = "App.exe --restserver"
subprocess.Popen(command)
server_address = "127.0.0.1:5000"
end_point = "/load/"
parameters = {
    "type": "designI"
}
url = "http://" + server_address + end_point
response_load_designI = requests.post(f"{url}", parameters)
Now I want to run App.exe again but load "designII" this time while I still have "designI" up and running. I am wondering what will be the server address for the 2nd time I run App.exe? Is there a way to force the 2nd run to use a different address? I tried in two terminals to run them separately but in both terminals I got the same server address ('127.0.0.1:5000').
When the load balancer in front of the HTTPS web site under test fails over, this generates some HTTPError 500 responses for a few seconds, then Locust hangs:
The response time graph stops (empty graph)
The total requests per second turns into a flat green line, which is wrong.
If I just stop and start the test, Locust resumes monitoring the response time properly.
We can see some HTTPError 500 entries in the Failures tab
Is this a bug?
How can I make sure Locust kills and restarts users, either manually or on a timeout?
My attempt to regularly "RescheduleTaskImmediately" did not help.
My locustfile.py:
#!/usr/bin/env python
import time
import random

from locust import HttpUser, task, between, TaskSet
from locust.exception import InterruptTaskSet, RescheduleTaskImmediately

URL_LIST = [
    "/url1",
    "/url2",
    "/url3",
]

class QuickstartTask(HttpUser):
    wait_time = between(0.1, 0.5)
    connection_timeout = 15.0
    network_timeout = 20.0

    def on_start(self):
        # Required to use the http_proxy & https_proxy
        self.client.trust_env = True
        print("New user started")
        self.client.timeout = 5
        self.client.get("/")
        self.client.get("/favicon.ico")
        self.getcount = 0

    def on_stop(self):
        print("User stopped")

    @task
    def track_and_trace(self):
        url = URL_LIST[random.randrange(0, len(URL_LIST))]
        self.client.get(url, name=url[:50])
        self.getcount += 1
        if self.getcount > 50 and (random.randrange(0, 1000) > 990 or self.getcount > 200):
            print(f'Reschedule after {self.getcount} requests.')
            self.client.cookies.clear()
            self.getcount = 0
            raise RescheduleTaskImmediately
Each Locust user runs in a greenlet. If it gets blocked waiting on a response, it doesn't take further actions.
self.client.get(url, name=url[:50], timeout=.1)
Something like this is probably what you need, potentially with a try/except to do something different when you get an HTTP timeout exception.
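As a rough sketch of that idea (the timeout value is arbitrary, and note that Locust's HTTP client may catch request exceptions itself and record them as failures, so the except branch is only a fallback):

from requests.exceptions import RequestException

@task
def track_and_trace(self):
    url = URL_LIST[random.randrange(0, len(URL_LIST))]
    try:
        self.client.get(url, name=url[:50], timeout=0.1)
    except RequestException:
        # start over with a clean session rather than staying blocked
        self.client.cookies.clear()
        raise RescheduleTaskImmediately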
In my experience, the problem you're describing with the charts on the Locust UI has nothing to do with the errors your Locust users are hitting. I've seen this behavior if you have multiple people attempting to access the Locust UI simultaneously. Locust uses Flask to create and serve the UI. Flask by itself (at least the way Locust is using it) doesn't do well with multiple connections.
If Person A starts using the Locust UI and starts a test, they'll see stats and everything working fine until Person B loads the Locust UI. Person B will then see things working fine, but Person A will experience the issues you describe, with the test seemingly stalling and charts not updating properly. In that state, sometimes starting a new test resolves it temporarily; other times you need to refresh. Either way, A and B will be fighting each other for a working UI.
The solution in this case would be to put Locust behind a reverse proxy using something such as Nginx. Nginx then maintains a single connection to Locust and all users connect through Nginx. Locust's UI should then continue to work for all connected users with correctly updating stats and charts.
Probably a simple question. I'm a newbie
I have a local computer (computer 'A') and a remote computer (computer 'B').
I want to run a bokeh server on B and have the results show up in A's browser when I browse to localhost:8000.
First I created this file on B. It just has a simple plot with a slider. You slide the slider and the plot changes. It works when I run it on A.
import sys

import numpy as np
from tornado.ioloop import IOLoop

from bokeh.application.handlers import FunctionHandler
from bokeh.application import Application
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.server.server import Server

def modify_doc(doc):
    # Initialize the data
    power = 1
    x = np.arange(10)
    y = x**power

    # Initialize the plot and slider
    p = figure()
    p_source = ColumnDataSource(data=dict(x=x, y=y))
    p.line('x', 'y', source=p_source)
    s = Slider(start=0, end=10, value=1, step=.1, title="multiplier")

    # When the slider is changed, redraw the plot
    def callback(attr, old, new):
        x = p_source.data['x']
        y = x**s.value
        p_source.data = dict(x=x, y=y)

    s.on_change('value', callback)
    doc.add_root(column([p, s]))

def main(_):
    io_loop = IOLoop.current()
    bokeh_app = Application(FunctionHandler(modify_doc))
    server_kwargs = {}
    server_kwargs['port'] = 8000
    server = Server({'/': bokeh_app}, io_loop=io_loop, **server_kwargs)
    server.start()
    io_loop.add_callback(server.show, "/")
    io_loop.start()

if __name__ == '__main__':
    main(sys.argv[1:])
So I copy this file to B and I run it on B by ssh'ing in and typing:
python barebones.py
Then on A I type:
ssh root@123.123.123.123 -N -D 7000
Note I typed 7000 not 8000. I've tried both; I don't understand why some things I've read online tell me to use a different number there.
Next I open Firefox > Preferences > Network proxy > Settings > Manual proxy configuration. I set SOCKS host to 'localhost' (without quotes) and port to 7000 and choose SOCKSv5. Click OK. In Firefox's address bar, browse to http://localhost:8000. (Note I said 8000 there not 7000. Again, not sure if that is correct but I've tried various permutations.) It says "Unable to connect".
I tried inserting this in the script but it didn't seem to have any effect. Maybe I'm way off here:
from bokeh.server.util import create_hosts_whitelist
allow_websocket_origin = ['0.0.0.0:8000']
port=8000
create_hosts_whitelist(allow_websocket_origin, port)
I know there is another way to run a bokeh server using "bokeh serve ..." command at command line, but I'm trying to create a standalone script if possible.
Gah. This should probably be simple. What am I missing?
Have you tried the following?
On B, run:
bokeh serve filename.py --address 0.0.0.0
You can then access the application from another computer with the URL http://Bs_name_or_ip_address:5006/. If there is a firewall running on the B computer, you have to allow incoming traffic on the port.
To override the default port number (5006), use the --port NNNN argument.
See also:
https://docs.bokeh.org/en/latest/docs/reference/command/subcommands/serve.html
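If you want to keep the standalone-script approach from the question, the same options can, as far as I know, be passed straight to Server; a sketch of the relevant part of main (the IP is the placeholder address from the question):

server_kwargs = {}
server_kwargs['port'] = 8000
server_kwargs['address'] = '0.0.0.0'
# let browsers that reach B by its public address open the websocket
server_kwargs['allow_websocket_origin'] = ['123.123.123.123:8000']
server = Server({'/': bokeh_app}, io_loop=io_loop, **server_kwargs)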
The following command worked for Bokeh 1+ versions:
bokeh serve --show filename.py --allow-websocket-origin=*:5006
I know this question is really old, but I ran into a similar problem and managed to solve it using the suggestion posted here: https://groups.google.com/a/continuum.io/forum/#!topic/bokeh/CmweAdzkWXw
Maybe this helps someone else:
On the node where Bokeh will run:
bokeh serve &
ssh -NfR 5006:localhost:5006 user@gateway
On the local machine (e.g., laptop):
ssh -NfL localhost:5006:localhost:5006 user@gateway
Now you should be able to navigate to http://localhost:5006 to access Bokeh running on a node behind the gateway.
Adding to the above answers, sometimes Bokeh says:
Refusing websocket connection from Origin 'http://<bokeh_server_ip_address>:8000'; use --allow-websocket-origin=<bokeh_server_ip_address>:8000 or set BOKEH_ALLOW_WS_ORIGIN=<bokeh_server_ip_address>:8000 to permit this; currently we allow origins {'localhost:8000'}
In that case, launch the server like this:
bokeh serve --show file_name --address 0.0.0.0 --port=8000 --allow-websocket-origin=<bokeh_server_ip_address>:8000
where <bokeh_server_ip_address> is the IP address of the remote server on which you want to host the Bokeh application.
I have some Fabric tasks in my fabfile and I need to initialize the env variable before they execute. I'm trying to use a decorator; it works, but Fabric always says "no host found Please specify (single)", even though if I print the content of my env variable everything looks good.
Also, I call my tasks from another Python script.
from fabric.api import *
from instances import find_instances

def init_env(func):
    def wrapper(*args, **kwargs):
        keysfolder = 'keys/'
        env.user = 'admin'
        env.key_filename = '%skey_%s_prod.pem' % (keysfolder, args[0])
        env.hosts = find_instances(args[1])
        return func(args[0], args[1])
    return wrapper

@init_env
def restart_apache2(region, groupe):
    print(env.hosts)
    run('/etc/init.d/apache2 restart')
    return True
My script, which calls the fabfile:
from fabfile import init_env, restart_apache2
restart_apache2('eu-west-1', 'apache2')
Output of the print in restart_apache2:
[u'10.10.0.1', u'10.10.0.2']
Any idea why my task restart_apache2 doesn't use the env variable?
Thanks
EDIT:
Interestingly, if in the script that calls the fabfile I use settings from fabric.api and set a host IP, it works. This shows that my decorator has initialized the env variable correctly, because the key and user are passed to Fabric. It's only env.hosts that isn't read by Fabric...
EDIT2:
I can reach my goal by using settings from fabric.api, like this:
@init_env
def restart_apache2(region, groupe):
    for i in env.hosts:
        with settings(host_string='%s@%s' % (env.user, i)):
            run('/etc/init.d/apache2 restart')
    return True
Bonus question: is there a way to use env.hosts directly, without settings?
I'm guessing a little here, but I'm assuming you've run into trouble because you're trying to solve two problems at once.
The first issue relates to the issue of multiple hosts. Fabric includes the concept of roles, which are just groups of machines that you can issue commands to in one go. The information from the find_instances function could be used to populate this data.
from fabric.api import *
from something import find_instances

env.roledefs = {
    'eu-west-1': find_instances('eu-west-1'),
    'eu-west-2': find_instances('eu-west-2'),
}

@task
def restart_apache2():
    run('/etc/init.d/apache2 restart')
The second issue is that you have different keys for different groups of servers. One way to resolve this problem is to use an SSH config file, so you don't have to mix the details of keys and user accounts into your Fabric code. You can either add an entry per instance to your ~/.ssh/config, or you can use a local SSH config (env.use_ssh_config and env.ssh_config_path; see the short sketch after the example below):
Host instance00
User admin
IdentityFile keys/key_instance00_prod.pem
Host instance01
User admin
IdentityFile keys/key_instance01_prod.pem
# ...
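To make Fabric pick that file up, the fabfile just needs to opt in, roughly like this (the explicit path is only needed if you use a non-default location):

env.use_ssh_config = True
# env.ssh_config_path = '/path/to/project/ssh_config'  # optional override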
On the command line, you should then be able to issue the commands like:
fab restart_apache2 -R eu-west-1
Or, you can still do single hosts:
fab restart_apache2 -H apache2
In your script, these two are equivalent to the execute function:
from fabric.api import execute
from fabfile import restart_apache2
execute(restart_apache2, roles = ['eu-west-1'])
execute(restart_apache2, hosts = ['apache2'])