I'm trying to use pip behind a proxy server which requires authentication. I've installed cntlm and filled in the hashed passwords. When I run this:
cntlm -c cntlm.ini -I -M http://www.google.co.uk
I enter my password and then get this as a result:
Config profile 1/4... Auth not required (HTTP code: 200)
Config profile 2/4... Auth not required (HTTP code: 200)
Config profile 3/4... Auth not required (HTTP code: 200)
Config profile 4/4... Auth not required (HTTP code: 200)
Your proxy is open, you don't need another proxy.
However, pip doesn't work: it still times out. Knowing that I don't need another proxy is all fine and dandy, but that doesn't help me. Port 3128 is working, because I can telnet on that port and it shows as listening under netstat. So what should I do from here?
Thank you.
I have had the exact same issue.
Cntlm is used for authenticating against proxy servers; those messages mean that your proxy does not require authentication.
The pip command does have a --proxy option. Try using something like:
pip install --proxy=10.0.0.1:80 package_name
If this works, you know that you don't need authentication to access the web. If it still fails try:
pip install --proxy=user:password@10.0.0.1:80 package_name
This gets around the authentication. I have written a small cmd script to handle this on Windows:
@echo off
:: GetPwd.cmd - Get password with no echo.
setlocal
<nul: set /p passwd=
for /f "delims=" %%i in ('python -c "from getpass import getpass; pwd = getpass();print pwd;"') do set passwd=%%i
echo.
::Prompt for the package name
set /p package=What package would you like to get:
::Get the package with PIP
pip install --proxy="admin:%passwd%#PROXY_ADDRESS:80" %package%
endlocal
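If you would rather do the same thing from Python directly, here is a minimal sketch of the equivalent flow (PROXY_ADDRESS, the port, and the admin username are placeholders, just as in the script above):

# Sketch: prompt for the proxy password without echo, then run pip
# through the authenticating proxy. PROXY_ADDRESS and the username
# are placeholders -- substitute your own.
import subprocess
import sys
from getpass import getpass

password = getpass("Proxy password: ")
package = input("What package would you like to get: ")
proxy = "http://admin:%s@PROXY_ADDRESS:80" % password
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--proxy", proxy, package]
)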
I've done a lot of research, and I can't find anything that actually solves my issue.
Since basically no site accepts mitmdump's certificate for HTTPS, I want to ignore those hosts. I can exclude a specific website with "--ignore-hosts (ip)" as normal, but I need to ignore all HTTPS/SSL hosts.
Is there any way I can do this at all?
Thanks a lot!
There is a script called tls_passthrough.py in the mitmproxy GitHub repository which ignores hosts that have previously failed a handshake because the user did not trust the presented certificate. However, it does not persist this information across sessions.
This also means that the first SSL connection to such a host will always fail. What I suggest is writing out all the IPs that have failed previously into a text file and ignoring all hosts listed in it (see the sketch after the usage example below).
tls_passthrough.py
To use it, just start mitmproxy with the script argument "-s (path to tls_passthrough.py)". For example:
mitmproxy -s tls_passthrough.py
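As for persisting the failed hosts between sessions, the bookkeeping part could look something like the sketch below. This is only the load/record logic; the file name is made up, and you would still need to wire it into the script's handshake-failure handling yourself:

# Sketch of the bookkeeping only: load previously failed hosts at startup
# and append new ones as they fail. The file name is an assumption.
import os

FAILED_HOSTS_FILE = "failed_tls_hosts.txt"

def load_failed_hosts():
    """Return the set of hosts that failed a handshake in earlier runs."""
    if not os.path.exists(FAILED_HOSTS_FILE):
        return set()
    with open(FAILED_HOSTS_FILE) as f:
        return {line.strip() for line in f if line.strip()}

def record_failed_host(host):
    """Remember a host so later sessions pass it through immediately."""
    with open(FAILED_HOSTS_FILE, "a") as f:
        f.write(host + "\n")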
You need a simple addon script to ignore all TLS connections:
from mitmproxy.proxy.layers import tls

class IgnoreAllTLS:
    def tls_clienthello(self, data: tls.ClientHelloData):
        '''
        Ignore every TLS handshake, so the connection is passed through raw.
        '''
        # could log str(data.context.server) and data.ignore_connection here
        data.ignore_connection = True

addons = [
    IgnoreAllTLS()
]
The latest release (7.0.4 as of this writing) does not support the ignore_connection feature yet, so you need to install from the main branch:
git clone https://github.com/mitmproxy/mitmproxy.git
cd mitmproxy
python3 -m venv venv
venv/bin/pip install -e .
Activate the venv before starting the proxy:
source /path/to/mitmproxy/venv/bin/activate
then start mitmproxy:
mitmproxy -s ignore_all_tls.py
You can ignore all HTTPS/SSL traffic by using a wildcard:
mitmproxy --ignore-hosts '.*'
I'm running a jupyter notebook inside an ubuntu 16.04 docker container, as a non-root user, with SSL configured via a .pem file. My issue is, I can't perform the jupyter notebook stop $port command to stop the running server.
I start the notebook by executing sudo HOME=/home/seiji -u seiji jupyter notebook to change the HOME environment variable (which is chown'd as seiji).
I can perform the usual jupyter notebook list command by running it as the user (seiji) and feeding in the JUPYTER_RUNTIME_DIR environment variable where jupyter looks for json files containing server info. For example: sudo JUPYTER_RUNTIME_DIR=/jupyter/runtime -u seiji jupyter notebook list correctly returns:
https://localhost:8888/ :: /jupyter/notebooks (I specify the runtime dir in the config file in the usual way).
My issue is, I can't figure out how to execute jupyter notebook stop 8888 in a similar way. If I run it as is, it runs as root and tells me There are no running servers. If I run it as user:seiji, I run into SSL issues. As in:
> sudo JUPYTER_RUNTIME_DIR=/jupyter/runtime -u seiji jupyter notebook stop 8888
returns an error. It begins: Shutting down server on port 8888 ... but then prints the following:
SSL Error on 10 ('::1', 8888, 0, 0): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
My guess is that it tries a 'http' address to access the server instead of 'https', but I can't figure out how to change this.
I've also tried passing the environment variable JUPYTER_CONFIG_DIR which contains the config file listing the location of the .pem file with the line c.NotebookApp.certfile = u'/jupyter/certs/mycert.pem'. I've also tried explicitly feeding in the location of the cert when running from cmdline with --certfile=[location] but it seems this is ignored. Does anyone have any ideas?
This can happen if your certificate cannot be verified by whichever SSL libraries Jupyter uses (I think the details have changed a bit over time). It is common if the certificate is self-signed: the default certificate stores may not be able to verify your issuer. I currently do something like this in my Jupyter setup script:
cat mycert.crt | openssl x509 -inform DER >> "$(python -c 'import certifi; print(certifi.where())')"
I believe if you already have the certificate in PEM form then you only need:
cat mycert.pem >> "$(python -c 'import certifi; print(certifi.where())')"
and then start it like this:
SSL_CERT_FILE=$(python -c 'import certifi; print(certifi.where())') jupyter notebook &
and stop similarly:
SSL_CERT_FILE=$(python -c 'import certifi; print(certifi.where())') jupyter notebook stop
The reason I use the certifi location and environment variable is that certifi appears to be the most-favoured package, and the environment setting appears to be respected by other libraries (including requests and built-in SSL modules).
The reason to also start the server with this updated file is so that notebooks can themselves connect to the server (e.g. for introspection).
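To sanity-check that the appended bundle actually works, something like this should connect and verify (a sketch only; localhost:8888 is an assumption for wherever your notebook server listens, and the hostname must match the certificate):

# Sketch: confirm Python can validate the notebook server's certificate
# against certifi's bundle. Host and port are placeholders.
import socket
import ssl

import certifi

ctx = ssl.create_default_context(cafile=certifi.where())
with socket.create_connection(("localhost", 8888)) as sock:
    with ctx.wrap_socket(sock, server_hostname="localhost") as tls_sock:
        print("verified subject:", tls_sock.getpeercert()["subject"])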
I am writing a command line app in python using the click module that will SSH to Cisco network device and send configuration commands over the connection using the netmiko module. The problem I'm running into is that SSH-ing to the network device requires a hostname/IP, username, and password. I am trying to implement a way for a user of my script to login to a device once and keep the SSH connection open, allowing subcommands to be run without logging in each time. For example,
$ myapp ssh
hostname/IP: 10.10.110.10
username: user
password: ********
Connected to device 10.10.101.10
$ myapp command1
log output
$ myapp --option command2
log output
$ myapp disconnect
closing connection to 10.10.101.10
How would I go about storing/handling credentials to allow this functionality in my cli? I have seen recommendations of caching or OAuth in researching this issue, but I'm still not sure how to implement this or what the recommended and safe way to do this is.
Perhaps you are attempting something like this:
$ myapp ssh -u user -p password
(myapp) command1
(myapp) command2
(myapp) disconnect
$
Python has a standard library module cmd that may help:
https://docs.python.org/3.5/library/cmd.html
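For instance, here is a minimal sketch of a persistent session built on cmd. The netmiko connection is held open on the shell object for the lifetime of the loop, so credentials live only in process memory; the device details and the command sent are placeholders:

# Sketch: keep one netmiko SSH connection open and dispatch subcommands
# to it from an interactive loop. Device details are placeholders.
import cmd
from getpass import getpass

from netmiko import ConnectHandler  # assumes netmiko is installed

class MyAppShell(cmd.Cmd):
    intro = "Connected. Type help or ? to list commands."
    prompt = "(myapp) "

    def __init__(self, connection):
        super().__init__()
        self.connection = connection

    def do_command1(self, arg):
        """Run a command on the device (placeholder command)."""
        print(self.connection.send_command("show version"))

    def do_disconnect(self, arg):
        """Close the connection and exit the shell."""
        self.connection.disconnect()
        return True  # returning True ends the cmd loop

if __name__ == "__main__":
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=input("hostname/IP: "),
        username=input("username: "),
        password=getpass("password: "),
    )
    MyAppShell(conn).cmdloop()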
When trying to traverse a SOCKS5 proxy to a RHEL5 Linux host using Fabric 1.6, the command returns, but no output is written to stdout.
$> fab -H myhost -f ./fabfile.py remote_test --show=debug
Using fabfile '/home/myuser/fabric/fabfile.py'
Commands to run: remote_test
Parallel tasks now using pool size of 1
[myhost] Executing task 'remote_test'
[myhost] run: echo testing
Enter SOCKS5 password for myuser:
[myhost] Login password for 'myuser':
$> echo $?
0
$>
The remote_test function is:
def remote_test():
    run('echo testing')
If I run the command against a non SOCKS5 host it works fine.
I am running the latest builds, although I haven't to date gotten this to work:
Python 2.7.3
Paramiko == 1.10.0
pycrypto == 2.6
fabric == 1.6.0
RHEL5.9
openssh-4.3p2-82.el5
My ~/.ssh/config looks like the following:
Host *.domain
ProxyCommand connect -S socksproxy.domain:1080 %h %p
And using the connect binary built from http://www.meadowy.org/~gotoh/ssh/connect.c
I don't have access to GitHub from the company network, so I will ask there when I get a chance as well.
Has anyone got any ideas why this could be occurring?
Thanks
Matt
I use connect rather than fabric, but the answer is surely the same. There is an explanation in connect.c that the SOCKS5_PASSWORD, HTTP_PROXY_PASSWORD, and CONNECT_PASSWORD environment variables do what you want. I have a script called ssh-tbb that goes as follows:
#!/bin/bash
export CONNECT_PASSWORD=""
exec ssh -o ProxyCommand="connect -5 -S 127.0.0.1:9150 %h %p" "$@"
Ideally, one should call this script ssh-tor and detect if tor lives on port 9050 or 9150 of course.
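If you want to stay inside Fabric rather than wrapping ssh, the same environment-variable trick could be applied in the fabfile itself, so the ProxyCommand spawned for each host inherits the password. This is a sketch under the assumption that the ProxyCommand subprocess inherits the fab process's environment; the variable name comes from connect.c as noted above:

# Sketch: set the password connect(1) reads at the top of the fabfile,
# before any connections are opened, so the spawned ProxyCommand sees it.
import os
from getpass import getpass

os.environ["SOCKS5_PASSWORD"] = getpass("Enter SOCKS5 password: ")

from fabric.api import run

def remote_test():
    run('echo testing')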
I am using Fabric to run commands on a remote server. The user with which I connect on that server has some sudo privileges, and does not require a password to use these privileges. When SSH'ing into the server, I can run sudo blah and the command executes without prompting for a password. When I try to run the same command via Fabric's sudo function, I get prompted for a password. This is because Fabric builds a command in the following manner when using sudo:
sudo -S -p <sudo_prompt> /bin/bash -l -c "<command>"
Obviously, my user does not have permission to execute /bin/bash without a password.
I've worked around the problem by using run("sudo blah") instead of sudo("blah"), but I wondered if there is a better solution. Is there a workaround for this issue?
Try passing shell=False to sudo. That way /bin/bash won't be added to the sudo command:
sudo('some_command', shell=False)
From line 503 of fabric/operations.py:
if (not env.use_shell) or (not shell):
    real_command = "%s %s" % (sudo_prefix, _shell_escape(command))
the else block looks like this:
# V-- here's where /bin/bash is added
real_command = '%s %s "%s"' % (sudo_prefix, env.shell,
                               _shell_escape(cwd + command))
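Putting that together, a minimal fabfile sketch (the service command is a placeholder for whatever your NOPASSWD sudoers rule actually allows):

# Sketch (Fabric 1.x): run one whitelisted command via sudo without the
# /bin/bash -l -c wrapper, so a NOPASSWD sudoers rule can match it exactly.
from fabric.api import sudo

def restart_service():
    sudo("service nginx restart", shell=False)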
You can use:
from fabric.api import env
# [...]
env.password = 'yourpassword'
In your /etc/sudoers file, add
user ALL=NOPASSWD: some_command
where user is your sudo user and some_command is the command you want to run with Fabric. Then, in the Fabric script, run it with shell=False:
sudo('some_command', shell=False)
This works for me.
In your /etc/sudoers file, you could add
user ALL=NOPASSWD: /bin/bash
...where user is your Fabric username.
Obviously, you can only do this if you have root access, as /etc/sudoers is only writable by root.
Also obviously, this isn't terribly secure, as being able to execute /bin/bash leaves you open to essentially anything, so if you don't have root access and have to ask a sysadmin to do this for you, they probably won't.
Linux noob here but I found this question while trying to install graphite-fabric onto an EC2 AMI. Fabric kept prompting for a root password.
The eventual trick was to pass the SSH private key file to Fabric:
fab -i key.pem graphite_install -H root@servername
You can also use passwords for multiple machines:
from fabric import env
env.hosts = ['user1@host1:port1', 'user2@host2:port2']
env.passwords = {'user1@host1:port1': 'password1', 'user2@host2:port2': 'password2'}
See this answer: https://stackoverflow.com/a/5568219/552671
I recently faced this same issue, and found Crossfit_and_Beer's answer confusing.
A supported way to achieve this is by using env.sudo_prefix, as documented in this GitHub commit (from this PR).
My example of use:
env.sudo_prefix = 'sudo '
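In context, that looks something like the following sketch (the task body is a placeholder; Fabric 1.x's default prefix includes -S -p with a prompt, which is what triggers the password exchange):

# Sketch: override Fabric's default sudo prefix ("sudo -S -p '<prompt>' ")
# with plain "sudo " so no password is piped in or prompted for.
from fabric.api import env, sudo

env.sudo_prefix = 'sudo '

def check_user():
    sudo('whoami')  # runs without a prompt if sudoers grants NOPASSWD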