Is it possible to peek at what Selenium is doing during automated tests? - Python

I perform headless web-session tests with Selenium (Python, Ubuntu Server 15, Firefox) which can last for hours. I make use of pyvirtualdisplay + Xvfb.
My Python scripts begin like this:
from pyvirtualdisplay import Display
virtualdisplay = True
if virtualdisplay:
    display = Display(visible=0, size=(1920, 1240))
    display.start()
How is it possible to peek at what's going on without actually taking screenshots, e.g. via a VNC session?
I tried several solutions, but they didn't work, perhaps because they are outdated or too general.

Using x11vnc can do the trick. Just add this line to the bash script you are using to launch the tests:
x11vnc -q -bg -display $DISPLAY
After that you can connect to your virtual display on the default port 5900 (or any other port of your choice). The -q and -bg flags make x11vnc run quietly and in the background, respectively.
Of course, you should set up port forwarding over your SSH connection:
ssh -L 5900:localhost:5900 yourhost
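If you'd rather keep everything inside the Python script instead of a separate bash step, here is a minimal sketch along the same lines (it assumes x11vnc is installed, and uses a placeholder where your test code goes):

import os
import subprocess
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1920, 1240))
display.start()  # also points DISPLAY at the new virtual display

# Expose the virtual display over VNC; -forever keeps x11vnc alive
# across client disconnects.
vnc = subprocess.Popen(["x11vnc", "-q", "-forever",
                        "-display", os.environ["DISPLAY"]])
try:
    pass  # ... run the Selenium session here ...
finally:
    vnc.terminate()
    display.stop()

You can then tunnel port 5900 over SSH exactly as above.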


How to auto-run a Python Selenium script on an Ubuntu server in the background

What I need
I have a Python Selenium script. When I run it on my local Ubuntu PC, it works fine.
But when I uploaded it to a server, I faced a problem: the server has no display.
I solved this problem with an X virtual framebuffer (Xvfb) display. What I need is to set up the display automatically and run my script in the background.
Problem
Now I run it manually in the following way:
I go to the terminal.
I set up the display with the following commands:
export DISPLAY=:1
Xvfb $DISPLAY -screen 0 1280x1024x16 &
I run the Python script with the command python3 products2.py.
This works fine.
But I need it to run automatically in the background
I created a conf file for supervisor and run the Python script with supervisor:
[program:prod]
command = /root/lowescom/l-env/bin/python3.10 /root/lowescom/lowes_project/modules/products2.py
user = root
autorestart = true
redirect_stderr = true
stdout_logfile = /root/lowescom/lowes_project/logs/debug.log
But this doesn't work. Even if I set up the display manually, it doesn't work.
Question
How can I run my Python Selenium script in the background automatically? The display setup should also be automated.
Update
I have just tried to use --no-sandbox, but it's still not working:
chrome_options = uc.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = uc.Chrome(use_subprocess=True, options=chrome_options)
If you are using chromedriver, you should set the option:
chrome_options.add_argument('--no-sandbox')
For Firefox, you can check the pyvirtualdisplay module.
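If you want the display setup to live inside the script itself, so supervisor doesn't depend on an externally started Xvfb, one option is to start pyvirtualdisplay before creating the driver. A minimal sketch, assuming pyvirtualdisplay and Xvfb are installed on the server (the URL is illustrative):

import undetected_chromedriver as uc
from pyvirtualdisplay import Display

# Start Xvfb and point DISPLAY at it before Chrome launches.
display = Display(visible=0, size=(1280, 1024))
display.start()

chrome_options = uc.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = uc.Chrome(use_subprocess=True, options=chrome_options)
try:
    driver.get('https://example.com')  # your scraping logic goes here
finally:
    driver.quit()
    display.stop()

Alternatively, supervisord's [program:x] section accepts an environment= key (e.g. environment=DISPLAY=":1"), but in that case Xvfb itself would still have to be started separately.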

Connect to undetected-chromedriver docker image

I have been using https://hub.docker.com/r/selenium/standalone-chrome on my Synology NAS to perform automated requests with Selenium WebDriver.
I don't remember the exact command I ran, but I started the container and ran driver = webdriver.Remote("http://127.0.0.1:4444/wd/hub") in Python to connect to the Selenium Chrome image.
However, I have a use case that requires undetected-chromedriver. How do I install something like https://hub.docker.com/r/bruvv/undetected_chromedriver and connect to it from my NAS's Python terminal?
Beware: anyone can publish on Docker Hub, so there are numerous undetected-chromedriver images out there. What you are trying to install is someone else's (failed) attempt.
Official: https://hub.docker.com/r/ultrafunk/undetected-chromedriver
As per #nnhthuan's comment, some more detail.
undetected-chromedriver will start the Chrome binary, but does it from Python instead of letting the chromedriver binary run Chrome. As undetected-chromedriver does not officially support headless mode, you'll need a way to run "windowed" Chrome on Docker. To make this happen, you can use Xvfb to emulate an X-server desktop. If you forget this step, you won't be able to connect to Chrome: Chrome closes itself down ("no screens found") before undetected-chromedriver is even able to connect, and so it crashes.
To ensure Xvfb keeps running, you could for example use something like this in your entrypoint:
#!/bin/bash
export DISPLAY=:1
# Restart Xvfb whenever it dies, so Chrome always finds a display.
function keepUpScreen() {
  echo "running keepUpScreen()"
  while true; do
    sleep .25
    if [ -z "$(pidof Xvfb)" ]; then
      Xvfb $DISPLAY -screen 0 1280x1024x16 &
    fi
  done
}
keepUpScreen &
echo "running: ${@}"
exec "$@"
Once your image is running stably, you could set your chromedriver debug_host to your internal IP address instead of 127.0.0.1, and debug_port to a static value. This enables connections from remote hosts.
Don't forget to forward the ports in Docker.
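To illustrate the attach side, here is a sketch of connecting from another machine using Selenium's debugger_address option. The address is a placeholder, and it assumes Chrome inside the container was started with its devtools endpoint on that host/port and that Docker publishes the port (e.g. -p 9222:9222):

from selenium import webdriver

options = webdriver.ChromeOptions()
# Hypothetical NAS-internal address; must match the devtools host/port
# Chrome was started with inside the container.
options.debugger_address = "192.168.1.50:9222"
driver = webdriver.Chrome(options=options)  # attaches instead of launching
driver.get("https://example.com")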

Getting back to a running script in a virtual environment after the connection to the remote server closed

I am running a Python script that collects data and runs inside a virtual environment hosted remotely on a VPS (Debian based).
My PC crashed and I am trying to get back to the live output of the Python script.
I know that the script is still running because it saves its data into a CSV file. That CSV is still being written.
If I activate the virtualenv source again, I can rerun the script, but it sounds like I would then have two instances of the same script running.
I am not familiar with virtual environments and I cannot find the right way to do this without deactivating and reactivating. I am running my script on the cheapest OVH VPS I could buy because my computer is clearly not reliable enough to run 24/7.
You might use screen to run your script in a separate terminal session. This avoids losing output if the SSH connection gets dropped.
The workflow would be something along these lines (on your host):
# Install screen
$ sudo apt update
$ sudo apt install screen
# Start a screen session
$ screen
# Run your script
$ python myscript.py
If your SSH connection drops, it's enough to:
# ssh back into the host from your client
# reattach previous screen session
$ screen -r
For advanced use, the official docs are quite comprehensive.
Note: More generally, what's explained above is pretty much the basic logic of a terminal multiplexer. You can achieve the same with tmux.

How to start headless browser with Selenium on Ubuntu using bash or Python

I'm running a cron job which runs a Python script that uses Selenium. Selenium requires a display, so I've installed Xvfb, started the display, and launched Firefox:
sudo Xvfb :10 -ac
export DISPLAY=:10
firefox
This works when I run these commands in the console, but I want to be able to do it with cron. How can I do this? And if I run the virtual display as the main user, will the Python script/Selenium have access to it when needed?
I would suggest using Selenium's GhostDriver (it runs PhantomJS, which is headless and needs no display at all).
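If you'd rather keep Firefox, another option is to start the virtual display from the script itself, so cron doesn't have to manage Xvfb or DISPLAY at all. A minimal sketch, assuming pyvirtualdisplay and Xvfb are installed (the URL is illustrative):

from pyvirtualdisplay import Display
from selenium import webdriver

# Starts Xvfb and points DISPLAY at it for this process.
display = Display(visible=0, size=(1280, 1024))
display.start()

driver = webdriver.Firefox()
try:
    driver.get("https://example.com")
finally:
    driver.quit()
    display.stop()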

Splinter/Selenium running on Flask / Uwsgi does not see headless display

So here's my setup:
Using a Flask server with uwsgi and, through a controller action, calling a Python script that uses splinter (which uses Selenium) to automate the GUI. The web server doesn't have a display, so I'm using Xvfb.
SSHing into the machine, running Xvfb, exporting DISPLAY=:99, and then running the Python script works great. But running it through a controller action does not work; I get the following error:
WebDriverException: Message: The browser appears to have exited before we could connect.
(this is the same error that is returned when xvfb isn't running)
ps aux shows that Xvfb is running as the same user as the web server. (I've isolated everything and have a separate controller action that executes
p = subprocess.Popen("Xvfb :99 &", stdout=fstdout, stderr=fstderr, shell=True)
and DISPLAY is set to :99 for both root and the web server user.)
I could install vncserver and try that, but I suspect I'd end up with the same problem. I've also tried avoiding calling Xvfb directly by using PyVirtualDisplay instead; same problem.
Edit: it errors on this line (if using splinter):
browser = Browser()
or, if selenium:
with pyvirtualdisplay.Display(visible=True):
    binary = FirefoxBinary()
    driver = webdriver.Firefox(None, binary)
(it errors on the last line there)
Any ideas?
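One thing worth ruling out: the uwsgi worker may not inherit the DISPLAY you exported in your SSH shell, so it can help to set it explicitly in the process that launches the browser. A sketch, assuming the Xvfb on :99 from above is already running:

import os
os.environ["DISPLAY"] = ":99"  # must match the Xvfb display started above

from splinter import Browser

browser = Browser()  # defaults to Firefox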
