I've done a lot of research, and I can't find anything that actually solves my issue.
Since basically no site accepts mitmdump's certificate for HTTPS, I want to ignore those hosts. I can access a specific website with "--ignore-hosts (ip)" as normal, but I need to ignore all HTTPS/SSL hosts.
Is there any way I can do this at all?
Thanks a lot!
There is a script file called tls_passthrough.py in the mitmproxy GitHub repository which ignores hosts that have previously failed a handshake because the user did not trust the new certificate. It does not, however, persist that list across sessions.
This also means that the first SSL connection from a particular host will always fail. What I suggest is writing all the IPs that failed previously into a text file and ignoring all hosts listed in that file (see the sketch after the startup example below).
tls_passthrough.py
To start it, just add it with the script argument "-s (path to tls_passthrough.py)". For example:
mitmproxy -s tls_passthrough.py
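Here is a minimal sketch of that suggestion, assuming a mitmproxy version that supports the tls_clienthello addon hook (shown in the next answer) and a hypothetical hand-maintained ignored_hosts.txt file with one host or IP per line:
# ignore_listed_hosts.py - sketch only; ignored_hosts.txt is a hypothetical
# file you fill with hosts that previously failed the TLS handshake.
from pathlib import Path

from mitmproxy.proxy.layers import tls

class IgnoreListedHosts:
    def __init__(self, path: str = "ignored_hosts.txt") -> None:
        # Load the hosts/IPs that failed before, one per line.
        self.ignored = set(Path(path).read_text().split())

    def tls_clienthello(self, data: tls.ClientHelloData):
        # data.context.server.address is a (host, port) tuple.
        address = data.context.server.address
        if address and address[0] in self.ignored:
            # Pass the connection through without interception.
            data.ignore_connection = True

addons = [IgnoreListedHosts()]
Start it the same way: mitmproxy -s ignore_listed_hosts.py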
You need a simple addon script to ignore all TLS connections.
from mitmproxy.proxy.layers import tls


class IgnoreAllTLS:
    def tls_clienthello(self, data: tls.ClientHelloData):
        '''
        Ignore every TLS connection.
        '''
        # print("tls hello from " + str(data.context.server) + ", ignore_connection=" + str(data.ignore_connection))
        data.ignore_connection = True


addons = [
    IgnoreAllTLS()
]
The latest release (7.0.4 as of now) does not support the ignore_connection feature yet, so you need to install the development version from source:
git clone https://github.com/mitmproxy/mitmproxy.git
cd mitmproxy
python3 -m venv venv
Activate the venv and install the cloned source into it:
source /path/to/mitmproxy/venv/bin/activate
pip install -e .
Then start mitmproxy:
mitmproxy -s ignore_all_tls.py
You can ignore all https/SSL traffic by using a wildcard:
mitmproxy --ignore-hosts '.*'
I have an old fabfile.py (Fabric 1.8.3) which has the following line:
env.key_filename = '/etc/appliance/fabric/id_rsa'
How do I do the same in Fabric 2.0.1? I tried using fab with the -i option, but it doesn't seem to work.
You can specify key_filename as part of connect_kwargs when constructing a Connection: fabric.connection.Connection
Alternatively, you can set the key_filename key of connect_kwargs in your configuration file.
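For example, a minimal sketch (the host, user, and key path are placeholders):
from fabric import Connection

# Pass the private key explicitly via connect_kwargs.
conn = Connection(
    host="appliance.example.com",  # hypothetical host
    user="deploy",                 # hypothetical user
    connect_kwargs={"key_filename": "/etc/appliance/fabric/id_rsa"},
)
conn.run("uname -s")
If you go the configuration-file route instead, putting connect_kwargs.key_filename into one of Fabric 2's config files (e.g. fabric.yaml next to the fabfile) makes every Connection pick it up.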
I would like to display "hello world" via MPI on different Google cloud compute instances with the help of the following code:
from mpi4py import MPI
size = MPI.COMM_WORLD.Get_size()
rank = MPI.COMM_WORLD.Get_rank()
name = MPI.Get_processor_name()
print("Hello, World! I am process/rank {} of {} on {}.\n".format(rank, size, name))
The problem is that, even though I can connect via SSH across all of these instances without a problem, I get a permission-denied error message when I try to run my script. I use the following command to invoke my script:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
And I get the following error message:
Permission denied (publickey).
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
Additional information:
I installed Open MPI on all of my nodes
I had Google set up all of my SSH keys automatically, by using gcloud to log into each instance from each instance
instance-type: n1-standard-1
instance-OS: Linux Debian (default)
Thank you for your help :-)
New Information:
(thanks @Zulan for pointing out that I should edit my previous post instead of creating a new answer for new information)
So, I tried to do the same with MPICH instead of Open MPI. However, I ran into a similar error message.
Command:
mpirun --host localhost,instance_1,instance_2 python hello_world.py
Error message:
Host key verification failed.
I can connect between my two instances via SSH without problems, and through the gcloud commands the SSH keys should automatically be set up properly.
So, does somebody have an idea what the problem could be? I also checked the PATH, the firewall rules, and my ability to write startup files into the temp folder. Can someone please try to recreate this problem? Also, should I raise this question with Google? (I've never done such a thing before, I'm quite unsure :S)
Thanks for helping :)
So I finally found a solution. Wow, this problem was driving me nuts.
It turned out that I needed to generate SSH keys manually for the script to work. I have no idea why, because the Google services had already set up the keys by using
gcloud compute ssh, but well, it worked :)
Steps I did:
instance_1 $ ssh-keygen -t rsa
instance_1 $ cd .ssh
instance_1 $ cat id_rsa.pub >> authorized_keys
instance_1 $ gcloud compute copy-files id_rsa.pub
instance_1 $ gcloud compute ssh instance_2
instance_2 $ cd .ssh
instance_2 $ cat id_rsa.pub >> authorized_keys
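Before rerunning the MPI job, you can check that prompt-free SSH now works from each node; ssh's BatchMode fails instead of asking for a password, so a hang or prompt shows up as an error (a quick sanity check, not part of the original recipe):
instance_1 $ ssh -o BatchMode=yes instance_2 hostname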
I will open another topic and ask why I cannot use ssh instance_2 even though gcloud compute ssh instance_2 is working. See: Difference between the commands "gcloud compute ssh" and "ssh"
I have this Jenkins build configuration for my Django application in "Execute Windows batch command" field:
// Code is downloaded using Git plugin
virtualenv data/.venv
call data/.venv/Scripts/activate.bat
pip install -r requirements/local.txt
cd src/
python .\manage.py test
cd ..
fabric dev deploy // Build job gets stuck here
All steps work OK except the last one. Jenkins gets stuck on Fabric's first attempt to connect to the remote server. In the "Console output" the spinner keeps spinning and I need to kill the build manually.
When I run the Fabric task manually from the CLI, it works. I read about some problems with Jenkins and known_hosts, so I tried env.reject_unknown_hosts = True in the fabfile to see if there is an "Add to authorized keys" question.
Fabfile is pretty standard, nothing special:
@task
def dev():
    env.user = "..."
    env.hosts = "..."
    env.key_filename = "..."
    env.reject_unknown_hosts = True
@task
def deploy():
    local("python src/manage.py check")  # <---- OK, output is in Jenkins
    run('git reset --hard')  # <---- Jenkins will freeze
    run('git pull --no-edit origin master')
    # etc ....
    print("Done.")
These commands require a password; the process is probably stuck asking for the user's password.
Add --no-pty to the command to make sure it does not block and instead reports the error.
It is then solved based on your specific remote/SSH/TTY setup.
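A minimal sketch of the same idea inside the fabfile, assuming Fabric 1's env settings and run's pty argument (names as in the question):
from fabric.api import env, run, task

# Abort with an error instead of hanging whenever something prompts.
env.abort_on_prompts = True

@task
def deploy():
    # pty=False avoids allocating a TTY, so password prompts fail loudly
    # instead of blocking the Jenkins build.
    run('git reset --hard', pty=False)
    run('git pull --no-edit origin master', pty=False)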
I'm trying to use Pip behind a proxy server which requires authentication. I've installed cntlm and filled out the hashed passwords. When I run this:
cntlm -c cntlm.ini -I -M http://www.google.co.uk
I enter my password and then get this as a result:
Config profile 1/4... Auth not required (HTTP code: 200)
Config profile 2/4... Auth not required (HTTP code: 200)
Config profile 3/4... Auth not required (HTTP code: 200)
Config profile 4/4... Auth not required (HTTP code: 200)
Your proxy is open, you don't need another proxy.
However, pip doesn't work, still giving me a timeout. Knowing that I don't need another proxy is all fine and dandy, but pip still times out. Port 3128 is working because I can telnet on that port and it shows as listening under netstat. So what should I do from here?
Thank you.
I have had the exact same issue.
Cntlm is used for authenticating against proxy servers; these messages mean that your proxy does not require authentication.
The pip command does have a --proxy option. Try using something like:
pip install --proxy=10.0.0.1:80 package_name
If this works, you know that you don't need authentication to access the web. If it still fails try:
pip install --proxy=user:password@10.0.0.1:80 package_name
This works to get around authentication. I have written a small cmd script to get around this on Windows:
@echo off
:: GetPwd.cmd - Get password with no echo.
setlocal
<nul: set /p passwd=
for /f "delims=" %%i in ('python -c "from getpass import getpass; pwd = getpass(); print(pwd)"') do set passwd=%%i
echo.
::Prompt for the package name
set /p package=What package would you like to get:
::Get the package with PIP
pip install --proxy="admin:%passwd%@PROXY_ADDRESS:80" %package%
endlocal
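If you'd rather not pass the proxy on every call, pip can also read it from its config file (pip.ini on Windows, pip.conf on Linux/macOS); a minimal sketch with the same placeholder address as above:
[global]
proxy = http://user:password@10.0.0.1:80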
I've followed the basic CherryPy tutorial (http://www.cherrypy.org/wiki/CherryPyTutorial). One thing not discussed is deployment.
How can I launch a CherryPy app as a daemon and "forget about it"? What happens if the server reboots?
Is there a standard recipe? Maybe something that will create a service script (/etc/init.d/cherrypy...)
Thanks!
Daemonizer can be pretty simple to use:
# this works for CherryPy 3.1.2 on Ubuntu 10.04
import cherrypy
from cherrypy.process.plugins import Daemonizer
# subscribe the daemonizer before mounting anything
Daemonizer(cherrypy.engine).subscribe()
cherrypy.tree.mount(MyDaemonApp, "/")
cherrypy.engine.start()
cherrypy.engine.block()
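If you also want a PID file for an init script to track, CherryPy ships a PIDFile plugin that is subscribed the same way; a minimal sketch (the path is a placeholder):
from cherrypy.process.plugins import PIDFile

PIDFile(cherrypy.engine, '/var/run/mydaemonapp.pid').subscribe()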
There is a decent HOWTO for SysV style here.
To summarize:
Create a file named after your application in /etc/init.d that calls /bin/sh:
sudo vim /etc/init.d/MyDaemonApp
#!/bin/sh
echo "Invoking MyDaemonApp";
/path/to/MyDaemonApp
echo "Started MyDaemonApp. Tremble, Ye Mighty."
Make it executable
sudo chmod +x /etc/init.d/MyDaemonApp
Run update-rc.d to create the proper links in the runlevel directories:
sudo update-rc.d MyDaemonApp defaults 80
sudo /etc/init.d/MyDaemonApp
There is a Daemonizer plugin for CherryPy included by default, which is useful for getting the server to start, but by far the easiest way for simple cases is to use the cherryd script:
> cherryd -h
Usage: cherryd [options]

Options:
  -h, --help            show this help message and exit
  -c CONFIG, --config=CONFIG
                        specify config file(s)
  -d                    run the server as a daemon
  -e ENVIRONMENT, --environment=ENVIRONMENT
                        apply the given config environment
  -f                    start a fastcgi server instead of the default HTTP
                        server
  -s                    start a scgi server instead of the default HTTP server
  -i IMPORTS, --import=IMPORTS
                        specify modules to import
  -p PIDFILE, --pidfile=PIDFILE
                        store the process id in the given file
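For example, a daemonized start might look like this (myapp is a hypothetical module that mounts your application on import; the config and pidfile paths are placeholders):
cherryd -d -i myapp -c prod.conf -p /var/run/myapp.pid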
As far as an init.d script goes I think there are examples that can be Googled.
And the cherryd script is found in:
virtualenv/lib/python2.7/site-packages/cherrypy/cherryd
or in: https://bitbucket.org/cherrypy/cherrypy/src/default/cherrypy/cherryd
I wrote a tutorial/project skeleton, cherrypy-webapp-skeleton, whose goal was to fill the gaps in deploying a real-world CherryPy application on Debian* for a web developer. It features an extended cherryd for daemon privilege drop. There is also a number of important script and config files for init.d, nginx, monit, and logrotate. The tutorial part describes how to put things together and eventually forget about it. The skeleton part proposes a way of arranging the assets of a CherryPy web application project.
* It was written for Squeeze, but practically it should be the same for Wheezy.
Info on Daemonizer options
When using Daemonizer, the docs don't state the options, e.g. how to redirect stdout or stderr. You can find the options in the source of the Daemonizer class. As a reference, take this example from my project:
# run the server as a daemon, redirecting stdout and stderr to log files
from cherrypy.process.plugins import Daemonizer

d = Daemonizer(cherrypy.engine,
               stdout='/home/pi/Gate/log/gate_access.log',
               stderr='/home/pi/Gate/log/gate_error.log')
d.subscribe()