I am executing my Robot Framework Selenium tests on a remote machine (it's a Docker container, but I need it to work with Podman too, so relying on Docker-specific commands wouldn't help me). On this remote machine there is an automatic process running in the background which produces terminal logs.
I can read this output when I execute docker logs <container_id> in my terminal, but I need to get it using Python and extract some information from these logs to show in the Robot Framework test log file.
Any ideas how to do it?
I have found several ways to execute a command on the remote machine and capture its output, but here I am not executing any command; I just need to read what is being produced automatically.
Thank you for your advice
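One way to sketch this: both Docker and Podman expose the same `logs` subcommand, so shelling out from Python keeps the code engine-agnostic. This is a hedged sketch, not a definitive implementation -- the container id and the keyword you filter on are placeholders, and if the engine runs on a different host you would prefix the command with `ssh user@host`:

```python
import subprocess

def read_container_logs(container_id, engine="docker"):
    """Fetch the current log output of a container.

    `engine` can be "docker" or "podman"; both provide the same
    `logs` subcommand, so the call works unchanged under Podman.
    """
    result = subprocess.run(
        [engine, "logs", container_id],
        capture_output=True, text=True, check=True,
    )
    # Container engines write log output to both streams.
    return result.stdout + result.stderr

def extract_lines(log_text, keyword):
    """Pick out the lines worth surfacing in the Robot Framework log."""
    return [line for line in log_text.splitlines() if keyword in line]
```

In a Robot Framework Python keyword library you could then call `read_container_logs(...)` and pass the extracted lines to `robot.api.logger.info()` so they appear in the test log file.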
I have a web app that I deployed to a machine with Ubuntu 20 installed.
To run the app I have to open an SSH session to the Ubuntu machine and then run these commands:
cd mywebapp
python3 app.py
It works successfully, but once I close the SSH console, reboot the machine, or anything else happens, it stops and I have to repeat these commands.
I tried to add it as a cron job to be run after the machine reboots, but it does not work.
I posted a question about it here: run python app after server restart does not work using crontab
Nothing has worked for me, and I have to make sure this web app is always running, because it has to send push notifications to mobile devices.
Can anyone please advise? I have been searching and trying for a long time.
I'm not an expert in this, but two solutions come to mind:
1- Using systemd:
systemd can be responsible for keeping services up.
You can write a custom unit for your app and configure it so that it is always up.
This tutorial may be useful: writing unit
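A minimal unit for the app above might look like the following. This is a sketch under assumptions: the user name, working directory, and interpreter path are guesses about your setup and need adjusting.

```ini
# /etc/systemd/system/mywebapp.service
[Unit]
Description=mywebapp
After=network.target

[Service]
# Assumed account and path; change to match your machine.
User=ubuntu
WorkingDirectory=/home/ubuntu/mywebapp
ExecStart=/usr/bin/python3 app.py
# Restart the app whenever it exits, for any reason.
Restart=always

[Install]
WantedBy=multi-user.target
```

After saving the file, `sudo systemctl enable --now mywebapp` starts it immediately and also at every boot.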
2- Using Docker:
When your app is containerized, you can configure it to come back up on failure or reboot (for example with a restart policy such as `--restart always`).
Read about it here
What if you put the Python invocation inside a bash script and run that as a daemon?
Your bash script could look like the one below (test.sh):
#!/bin/sh
cd desired/directory
python3 app.py
and you can run the bash script like this using nohup:
nohup ./test.sh 0<&- &>/dev/null &
You can refer to this if you want to store the output of nohup.
I am trying to visualise different loss function plots during training using Visdom. I am using a HPC3 system which makes use of a SLURM script and command line arguments to run the training.
I have already tried to follow this tutorial https://gist.github.com/amoudgl/011ed6273547c9312d4f834416ab1d0c but when I run it, it says the port is already in use, even though I cannot actually open the link itself. I tried using the demo.py script provided in the link above to try an example before changing my code, but even that does not work for me.
I am not sure if it is the nature of SLURM scripts/commands or if I have not implemented it properly. My steps are as follows (following the tutorial above):
ssh -N -f -L localhost:8097:localhost:8097 myuser@hpc3.xxx.xxx (on remote server terminal)
Using sbatch command and the slurm script:
no_proxy=localhost python demo.py
Then on my local machine (using Terminal), I activate visdom:
visdom
This has led to two outcomes: either I am unable to connect to the port and get an error saying the port is already in use, or it sometimes lets me navigate to it in my browser but nothing shows up.
Any help would be greatly appreciated.
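One way to narrow down the "port already in use" symptom is to check, on each end of the tunnel, whether anything is actually listening on 8097. A small sketch (the port number is just the tutorial's default):

```python
import socket

def port_in_use(port, host="localhost"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on success instead of raising.
        return s.connect_ex((host, port)) == 0
```

Running `port_in_use(8097)` on the login node and locally tells you whether a stale ssh tunnel or an old visdom server is still holding the port, in which case killing it (or picking a different port for both the tunnel and visdom) usually clears the error.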
I am using Airflow locally to execute some ETL tasks (everything is done locally using Airflow, Python and Docker), and I have a task which failed.
If I could use the PyCharm debugger it would be great. I am looking for a way for PyCharm to listen to what is happening on Airflow (localhost/airflow), so that once I run a task in Airflow I only need to jump to PyCharm to start debugging and see the logs.
I have read about the remote debug server, but in the tutorials I have seen, they all run their program from PyCharm with a main function inside the file.
What I want is to launch my task through Airflow and then jump to PyCharm to see the logs and start debugging.
So I started something, but I am not sure if this is the right way: when I try to add a remote interpreter to my project (Preferences / Interpreter / Add / Docker Compose), here is what I get.
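For code started outside the IDE (an Airflow task in Docker here), the usual pattern is PyCharm Professional's "Python Debug Server" run configuration together with the `pydevd-pycharm` package: the task connects back to a PyCharm instance that is already listening. A hedged sketch -- the host, port, and the package being installed in the Airflow image are all assumptions about your setup:

```python
# Requires: pip install pydevd-pycharm (version matching your PyCharm)
try:
    import pydevd_pycharm
except ImportError:
    pydevd_pycharm = None  # debugger not installed; run normally

def attach_pycharm_debugger(host="host.docker.internal", port=5678):
    """Connect back to a waiting PyCharm Debug Server, if available.

    Call this at the top of the task callable you want to debug,
    then trigger the task from the Airflow UI as usual.
    """
    if pydevd_pycharm is not None:
        pydevd_pycharm.settrace(
            host, port=port,
            stdoutToServer=True, stderrToServer=True, suspend=False,
        )
```

`host.docker.internal` is how a container typically reaches the host machine on Docker Desktop; on Linux you may need the host's bridge IP instead. Start the Debug Server in PyCharm first, then run the task.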
I have an Azure VM that I SSH into. The VM is a Windows 10 host.
I can create files with touch, change directories and so on, but whenever I try to run a python script that is hosted on the VM I get the following error:
The system cannot execute the specified program.
At first glance I thought there was a problem with my python alias and the PATH variable, so I decided to use RDP to log into the machine, open a CMD and try the same command, which worked just fine. The Python program executed flawlessly.
I used where to find out where my python.exe is located, so that on the remote terminal I can run something like:
C:\Users\User01\AppData\Local\Microsoft\WindowsApps\python.exe test.py
This still results in the same error message as the one stated above.
Can I get some help?
Hi, try to execute your command as below:
ssh user@machine python < script.py
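The one-liner above streams the local script into the remote interpreter over stdin. If you want to drive the same thing from Python, a sketch could look like this (user, host, and interpreter name are placeholders):

```python
import subprocess

def build_ssh_python_cmd(user, host, interpreter="python3"):
    """Build the argv for `ssh user@host python3 < script.py`."""
    return ["ssh", f"{user}@{host}", interpreter]

def run_script_remotely(user, host, script_path):
    """Stream a local script to the remote interpreter and capture output."""
    with open(script_path) as script:
        return subprocess.run(
            build_ssh_python_cmd(user, host),
            stdin=script, capture_output=True, text=True,
        )
```

Note that this sidesteps PATH problems on the remote side only if `python3` (or whatever interpreter name you pass) resolves correctly in the non-interactive SSH environment.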
I am trying to execute a Python CGI script which contains a bash script that logs into remote servers to fetch logs. The log is then shown in the browser.
I am able to run this script from the terminal and get the expected result, whereas when trying to run it from a browser there is no result: a blank page shows up.
After a lot of analysis I found that when running from the browser it runs as the apache user, which does not have permission to log into the remote servers.
I want to make it run as a particular user. Is that possible?
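One common approach is to keep the CGI running as apache but delegate the privileged step through sudo to a dedicated account. This is a hedged sketch: the `loguser` account, the script path, and the matching sudoers entry are assumptions about your setup, not your actual names.

```python
import subprocess

# Assumed sudoers entry (e.g. in /etc/sudoers.d/cgi-logs) granting apache
# passwordless rights to run exactly one script as 'loguser':
#   apache ALL=(loguser) NOPASSWD: /usr/local/bin/collect_logs.sh

def build_sudo_cmd(user, script):
    """Build the argv that runs `script` as `user` via sudo."""
    return ["sudo", "-u", user, script]

def collect_logs_as(user, script):
    """Run the log-collection script as the given user and capture output."""
    return subprocess.run(
        build_sudo_cmd(user, script),
        capture_output=True, text=True,
    )
```

Restricting the sudoers rule to a single fixed script (rather than a shell) keeps the exposure small; the CGI then prints `collect_logs_as(...).stdout` into the response.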