I am using Airflow locally to execute some ETL tasks (everything is done locally with Airflow, Python, and Docker), and I have a task that failed.
It would be great if I could use the PyCharm debugger. I am looking for a way for PyCharm to listen to what is happening in Airflow (localhost/airflow), so that once I run a task in Airflow I only need to jump to PyCharm to start debugging and see the logs.
I have read about the remote debug server, but in all the tutorials I see, people run their program from PyCharm with a main function inside the file.
What I want is to launch my task through Airflow and then jump to PyCharm to see the logs and start debugging.
So I started something, but I am not sure it is the right way. When I try to add a remote interpreter to my project (Preferences / Interpreter / Add / Docker Compose), here is what I get:
[screenshot of PyCharm's Add Interpreter / Docker Compose dialog]
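From the tutorials, the remote-debug-server approach seems to boil down to a call like this inside the task code (only a minimal sketch; the task callable, host, and port are placeholders, and it needs the pydevd-pycharm package matching the PyCharm build):

import pydevd_pycharm

def my_etl_task():
    # connect back to the Python Debug Server listening in PyCharm;
    # from inside a container the host is typically host.docker.internal
    pydevd_pycharm.settrace("host.docker.internal", port=5678,
                            stdoutToServer=True, stderrToServer=True)
    # ... the rest of the task then runs under the debugger ...

but, as said, the tutorials all start the program from PyCharm itself.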
I have a web app that I deployed to a machine running Ubuntu 20.
To run the app, I have to SSH into the Ubuntu machine and then run these commands:
cd mywebapp
python3 app.py
It works successfully, but once I close the SSH console, reboot the machine, or anything else happens, it stops and I have to repeat these commands.
I tried to add it as a cron job to be run after the machine reboots, but it does not work.
I posted a question about it here: run python app after server restart does not work using crontab
Nothing has worked for me, and I have to make sure that this web app is always running, because it has to send push notifications to mobile devices.
Can anyone please advise? I have been searching and trying for a long time.
I'm not an expert in this, but two solutions come to mind:
1- Using systemd:
systemd can be responsible for keeping services up.
You can write a custom unit for your app and configure it so that it is always kept up (see the sketch after this list).
This tutorial may be useful: writing unit
2- Using Docker:
When you have a containerized app, you can configure it to come back up on failure or anything like that, using a restart policy.
Read about it here
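For the systemd route, a minimal unit could look like the following (only a sketch; the file name /etc/systemd/system/mywebapp.service, the paths, and the working directory are assumptions you should adapt):

[Unit]
Description=mywebapp
After=network.target

[Service]
WorkingDirectory=/home/ubuntu/mywebapp
ExecStart=/usr/bin/python3 app.py
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now mywebapp, and systemd will start it at boot and restart it whenever it dies. The Docker equivalent is a restart policy, e.g. docker run --restart unless-stopped ... on the container that wraps the app.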
What if you put the piece that calls your Python script inside a shell script and run that as a daemon?
Your script could look like the one below (test.sh):
#!/bin/sh
# change to the app's directory (adjust the path), then start the app
cd desired/directory
python3 app.py
and you can run the script with nohup like this:
nohup ./test.sh 0<&- &>/dev/null &
Here 0<&- closes stdin, &>/dev/null discards stdout and stderr, the trailing & puts the job in the background, and nohup keeps it running after you log out. You can refer to this if you want to store the output of nohup instead.
I am executing my Robot Framework Selenium tests on a remote machine (it's a Docker container, but I need it to work with Podman too, so I guess Docker-specific commands wouldn't help me), and on this remote machine there is an automatic process running in the background, producing terminal logs.
I can read this output when I execute docker logs <container_id> in my terminal, but I need to get it using Python and extract some info from these logs to show in the Robot Framework test log file.
Any ideas how to do it?
I have found several ways to execute a command on the remote machine and get the output, but here I am not executing any command; I just need to read what is being produced automatically.
Thank you for your advice.
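The closest idea I have so far is to shell out to the host-side CLI from Python, since docker logs (and podman logs, which has the same shape) runs on the host rather than inside the container; a minimal sketch, with the container id and function name as placeholders:

import subprocess

def read_container_logs(container_id):
    # 'docker logs' replays the container's stdout and stderr
    # to the corresponding streams of this process
    result = subprocess.run(
        ["docker", "logs", container_id],
        capture_output=True, text=True, check=True,
    )
    return result.stdout + result.stderr

But I am not sure this is the right approach, since it still feels like executing a command.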
I am looking to run commands that I would typically run through the Heroku CLI via a Python script, namely:
heroku pg:backups:capture
Historically, for running Heroku commands in Python I have used Heroku3.py, which works for me for things like restarting dynos. I am having difficulty finding a way to execute commands for add-ons, such as the one listed above.
Is there a way to call CLI commands through Python?
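The obvious fallback seems to be shelling out to the CLI itself with subprocess (a minimal sketch; it assumes the Heroku CLI is installed and authenticated, and the app name is a placeholder), but I don't know whether that is the recommended route:

import subprocess

# run the CLI command and capture its output; raises on a non-zero exit code
result = subprocess.run(
    ["heroku", "pg:backups:capture", "--app", "my-app"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)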
I am trying to create a web application using Flask. I have already gotten somewhat comfortable with Python, using Spyder inside Anaconda Navigator. Now I am playing around with Flask, doing basic functions, and have been successful so far by testing on the local server 127.0.0.1:5000. The problem I am having is that I cannot stop the server once I run the script in Spyder. I have stopped the script and run other scripts through the console, but the local server remains the same.
The reason this is a problem is that when I change files and run a different Flask script, the server does not update with the new information. For example, if I run a Flask script that returns "Hello World" on the main page, then stop that file and open a new file with a different Flask script that returns "The sky is blue", the server does not change when I check it in Chrome or any other browser. It will only return "Hello World".
I have been able to fix this problem by completely restarting my computer, but I am wondering if there is another way to just restart the local server, 127.0.0.1:5000. Thank you!
Also, I am using Windows.
I do: "Run > Configuration per file > Execute in an external system terminal".
Then, when you run your .py containing the app.run, it is launched in an external console. If you close the console, the server is closed too.
To kill the local server, use Ctrl+C and not any other command. This shortcut is also mentioned in the console when the server is up and running.
I've been having this precise issue and have been smashing my head against the wall for a couple of hours. I posted the referenced StackOverflow question (my first, actually), and it seems that running a script from inside Spyder is the wrong way to go, as it leaves runaway background processes running even after restarting Spyder.
I got the recommendation to only launch my *.py code from the command prompt. Furthermore, I was told to do this:
set FLASK_APP=main1.py
set FLASK_DEBUG=1
flask run
though I'm not sure what that does, so I will investigate. I was about to restart my computer as a last-ditch effort until I looked in the Windows Task Manager and found some Python tasks running. After ending both of them, I was able to launch the updated webpage on my local host.
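If you would rather not dig through Task Manager, the same cleanup can be done from a command prompt (be careful: this force-kills every running python.exe process):

taskkill /F /IM python.exe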
I'm developing a Cassandra storage finder for graphite-api.
graphite-api is installed via pip and run via gunicorn, so I can't just call the script with a debugger, but I want to use interactive debugging.
When I import pdb in my storage finder and set a breakpoint, the code halts there, but how can I connect to the headless pdb session running in the script?
Or is my approach to this debugging problem the wrong one, and does this have to be done in a completely different way?
pdb gives control over to gunicorn, which is not what you want. Have a look at rpdb or other remote debugging solutions.
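A minimal sketch of the rpdb route (assuming pip install rpdb; by default it listens on 127.0.0.1:4444, and the function name here is a placeholder for wherever your breakpoint currently sits):

import rpdb

def find_nodes(self, query):
    # opens a pdb session bound to 127.0.0.1:4444 and blocks until a client connects
    rpdb.set_trace()
    ...

Then connect from another shell on the same host, e.g. with nc 127.0.0.1 4444 or telnet, and debug interactively.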