I'm trying to write a twill test script that changes the proxy server settings between two different tests. I need to trigger this change at runtime, without relaunching the test script.
I've tried using the "http_proxy" environment variable by setting os.environ["HTTP_PROXY"], but it only changes the proxy setting for the first test and has no effect on the second and third tests.
Could you please suggest a way to change twill's proxy settings at runtime?
Set the proxy environment variable before you run the twill script.
sh/ksh/bash
export HTTP_PROXY=blah:8080
csh
setenv HTTP_PROXY blah:8080
It's worth noting that this should also work by setting os.environ['http_proxy'], but it may not if you set it after you import twill; twill may check the variable only once at startup. The only 100% safe way I can imagine is exporting the variable before launching the script, so that all child processes get it in their environment.
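A minimal sketch of that ordering, assuming the tests use twill's Python API (the URLs and proxy hosts are placeholders, and whether reset_browser() actually makes twill re-read the variable depends on the twill version):

import os

# Set the proxy before twill (or anything that reads http_proxy at
# import time) is loaded, so the first value it sees is the right one.
os.environ["HTTP_PROXY"] = "proxy-one.example.com:8080"
os.environ["http_proxy"] = "proxy-one.example.com:8080"

from twill.commands import go, reset_browser

go("http://example.com/")    # first test, through proxy one

# Changing the variable mid-run may be ignored if twill cached it at
# import time; recreating the browser is an attempt to force a re-read.
os.environ["http_proxy"] = "proxy-two.example.com:8080"
reset_browser()
go("http://example.com/")    # second test, hopefully through proxy two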
Related
I am currently using Python to write some Appium tests. Because I am behind a corporate firewall, my traffic needs to go via a proxy.
I have set my http_proxy and https_proxy variables, but it seems like this is not being picked up by python during execution.
I tried the exact same test using JavaScript and Node, and there the proxy gets picked up and everything works, so I am sure the problem is Python not honouring the proxy settings.
How can I make sure Python is using the correct proxy settings?
I am using Python 2.7 on macOS Mojave.
Thanks!
So I figured out that Appium currently does not support an option to provide a proxy when making the remote connection. As a temporary solution I modified the remote_connection module of selenium, which Appium inherits from, forcing it to use a proxy URL for the connection.
My Python knowledge is not that good, but I think it shouldn't take much effort for someone to make a module that wraps/overrides the Appium webdriver remote connection to include a proxy option.
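For what it's worth, a rough sketch of what such a wrapper could look like. It leans on selenium internals (the _conn pool that RemoteConnection creates when keep_alive is true in the 3.x series), so the attribute name and constructor signature are assumptions that need checking against your installed version:

import urllib3
from selenium.webdriver.remote.remote_connection import RemoteConnection

class ProxiedRemoteConnection(RemoteConnection):
    """RemoteConnection that sends its HTTP traffic through a proxy."""

    def __init__(self, remote_server_addr, proxy_url, keep_alive=True):
        super().__init__(remote_server_addr, keep_alive=keep_alive)
        # Swap the plain PoolManager for a ProxyManager so every
        # WebDriver/Appium command is routed through the proxy.
        # NOTE: _conn is an internal attribute of selenium 3.x; other
        # versions build their connection pool differently.
        self._conn = urllib3.ProxyManager(proxy_url)

Selenium's webdriver.Remote accepts a RemoteConnection instance as its command_executor, so in principle you could pass ProxiedRemoteConnection("http://127.0.0.1:4723/wd/hub", "http://proxy.corp.example:3128") there; whether the Appium Python client accepts the same object, or needs the equivalent change on its own connection class, I haven't verified.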
I am connecting to a remote server through
ssh user@server.com
and run
python script.py
in the appropriate directory. However, I get the error
ImportError: No module named numpy
even though I know the module is installed and the script runs with no problems when I am physically logged in to that server.
None of the answers I was able to find worked (for example this, and this). Do you have any ideas as to how I can run the script over ssh?
The remote server has Python 2.6.6 installed, and
which python
returns
/usr/bin/python
The remote server runs CentOS.
See a similar problem described here: Why does an SSH remote command get fewer environment variables then when run manually?
Compare your environment variables in the local (physical) session to the remote session by running env in both cases. Move the missing variables from your local profile to /etc/profile, then log out of the ssh session and connect again.
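A quick way to see what the interpreter itself resolves in each case is a tiny diagnostic script (the file name is just an example); run it once from the interactive login and once via ssh user@server.com python diag.py and compare:

# diag.py
import os
import sys

print("executable:", sys.executable)
print("version:   ", sys.version.split()[0])
print("PYTHONPATH:", os.environ.get("PYTHONPATH", "<not set>"))
for p in sys.path:
    print("  search path:", p)

A different executable or a shorter sys.path in the ssh case usually explains why numpy is suddenly "not installed".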
Another approach: if you don't want to change anything, then after ssh switch to your user via su - <your user>. This may look weird because you are already logged in as that user, but the difference is that after su all your environment variables are set as in a local (physical) session. Advantage: it is quick. Disadvantage: you have to do it every time you want to run your Python script, so the first approach of configuring /etc/profile may be better in the long run.
How to check what proxies are used by Python3 Requests module?
I have verified (from the responses) that when you set an HTTP proxy in the system configuration on macOS, but do not set the http_proxy environment variable, it is still picked up automatically by Requests. It seems like it uses the proxies that urllib.request.getproxies() returns, but I'm not sure, because the documentation only says:
You can also configure proxies by setting the environment variables HTTP_PROXY and HTTPS_PROXY.
and there doesn't seem to be any description of how the system proxy configuration is handled.
Finally I found the solution:
requests.utils.getproxies()
or
requests.utils.get_environ_proxies(url)
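For example (the printed mapping depends on your environment variables and, on macOS or Windows, on the system proxy configuration):

import requests

# Proxies taken from the environment and, on some platforms, from the
# system configuration, as a scheme -> proxy URL mapping.
print(requests.utils.getproxies())
# e.g. {'http': 'http://127.0.0.1:8888', 'https': 'http://127.0.0.1:8888'}

# Proxies that would actually apply to this particular URL, after
# honouring no_proxy/NO_PROXY.
print(requests.utils.get_environ_proxies("http://example.com/"))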
I am trying to deploy a Django app on OpenShift (Python 3.3, Django 1.7, OpenShift 2.1).
I need to set the OPENSHIFT_PYTHON_WSGI_APPLICATION to point to an alternative wsgi.py location.
I have tried using the pre_build script to set the variable, using the following commands:
export OPENSHIFT_PYTHON_WSGI_APPLICATION="$OPENSHIFT_REPO_DIR"geartest4/wsgi.py
echo "-------> $OPENSHIFT_PYTHON_WSGI_APPLICATION"
I can see during the git push that the pre_build script sets the variable correctly; the echo shows the expected path. However, wsgi.py does not launch and I get:
CLIENT_ERROR: WSGI application was not found
When I immediately ssh into the gear and check the environment variable, I see OPENSHIFT_PYTHON_WSGI_APPLICATION="", i.e. the value did not stick.
If I set the variable manually from my workstation using rhc set-env OPENSHIFT_PYTHON_WSGI_APPLICATION=/var/lib/openshift/gear_name/bla/bla then the variable sticks, the wsgi server launches, and the app works fine.
The problem is that I don't want to use rhc set-env because that means I have to hardwire the gear name in the path. This becomes a problem when I want to do scaling with multiple gears.
Does anyone have any ideas on how to set the variable and make it stick?
The environment variable OPENSHIFT_PYTHON_WSGI_APPLICATION can be set to a relative path like this:
rhc env set OPENSHIFT_PYTHON_WSGI_APPLICATION=wsgi/wsgi.py
The openshift cartridge openshift-django17 by jfmatth uses this approach, too.
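If I understand the relative-path handling correctly, the path is resolved against the repository directory that you push to the gear (i.e. $OPENSHIFT_REPO_DIR), so a layout along these lines would match the command above (names other than wsgi/wsgi.py are just examples):

<repo root>/
    wsgi/
        wsgi.py        <- OPENSHIFT_PYTHON_WSGI_APPLICATION=wsgi/wsgi.py
    geartest4/
        settings.py
        wsgi.py        <- or point the variable at geartest4/wsgi.py instead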
So I am running Selenium on an Ubuntu Server VM and have a minor issue. When I start up my VM and run a Selenium test script I get this error: selenium.common.exceptions.WebDriverException: Message: 'The browser seems to have exited before we could connect'. However, if I execute export DISPLAY=:99 in the terminal before I run any of my Selenium test scripts, all works fine and every test runs great headlessly.
My question is: do any of you know how to execute this command at start-up, so I don't have to run it in the terminal before my Selenium test scripts? I've tried adding it to the /etc/rc.local file, but this doesn't seem to work.
I've also tried executing it at the beginning of my Selenium test scripts by just adding this (I'm using Python):
os.system("export DISPLAY=:99")
Any suggestions as to how to accomplish this?
Thanks in advance
This isn't going to work:
os.system("export DISPLAY=:99")
Because system() starts a new shell, and that shell exits as soon as the command finishes, this only affects the environment of one very short-lived process. (Child processes cannot influence the environment of their parents; parents can only influence the environment of their children, and only if they make the change before starting the child process.)
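You can see this for yourself in two lines:

import os

os.system("export DISPLAY=:99")      # runs in a throw-away child shell
print(os.environ.get("DISPLAY"))     # unchanged in the parent; the export died with that shell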
You can pick a few different mechanisms for setting the DISPLAY:
Set it in the scripts that start your testing mechanism
This is especially nice if the system might do other tasks, as this will influence as little as possible. In Python, that would look like:
os.environ["DISPLAY"]=":99"
In bash(1), that would look like:
export DISPLAY=:99
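Put together for a Selenium script, that first option might look like this (Firefox is just an example; any browser that should render on the Xvfb display works the same way):

import os

# Point at the Xvfb display before the browser is started, so the
# browser process inherits it from this script.
os.environ["DISPLAY"] = ":99"

from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com/")
print(driver.title)
driver.quit()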
Set it in the login scripts of the user account that runs the tests.
This is nice if the user account that runs the tests will never need a DISPLAY variable. (Though if a user logs in via ssh -X testinguser@machine ... this will clobber the usual ssh(1) X session forwarding.)
Add this to your user's ~/.bashrc or ~/.profile or ~/.bash_profile. (See bash(1) for the differences between the files.)
export DISPLAY=:99
Set it at login for all users. This is nice if multiple user accounts on the system will be running the testing scripts and you just want it to work for all of them. You don't care about users ever having a DISPLAY for X forwarding.
Edit /etc/environment to add the new variable. The pam_env(8) PAM module will set the environment variables for all user accounts that authenticate under whichever services are configured to use pam_env(8) in the /etc/pam.d/ configuration directory. (This sounds more complicated than it is -- some services want authenticated users to have environment variables set, some services don't.)
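For example, a single added line in /etc/environment (plain KEY=value pairs, no export keyword) is enough:

DISPLAY=:99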