I've got a small application (https://github.com/tkoomzaaskz/cherry-api) and I would like to integrate it with Travis. In fact, Travis is probably not important here. My question is how I can configure a build/job to execute the following sequence:
start the server that serves the application
run tests
close the server (which means close the build)
The application is written in Python/CherryPy (a basic webapp framework). On my localhost I do it using two consoles: one runs the server and the other runs the tests - it's pretty easy and works fine. But when I want to execute all this in the CI environment, I run into trouble: I'm unable to regain control after the server is started, because the server process waits for requests... and waits... and waits... and the tests are never run (https://travis-ci.org/tkoomzaaskz/cherry-api/builds/10855029 - this build runs forever). Additionally, I don't know how to close the server. This is my .travis.yml:
before_script: python src/hello.py
script: nosetests
src/hello.py starts the built-in CherryPy server (listening on localhost:8080). I know I can move it to the background by adding an ampersand: before_script: python src/hello.py & - but then I'd have to find the process ID in the CI environment and kill the process, which seems like a very dirty solution; I guess there's something better than that.
I'd appreciate any hints on how to configure this.
edit: I've configured this dirty run-in-the-background-and-then-kill-the-process approach in this file. The build passes now. Still, I think it's ugly...
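For the record, a slightly tidier variant of that background approach might look like the sketch below. The server.pid name is my own, the nc-based wait loop assumes netcat is available on the build image, and it also assumes Travis runs the before_script commands in one shell session, so $! is still visible on the next line.

before_script:
  - python src/hello.py &
  - echo $! > server.pid
  - until nc -z localhost 8080; do sleep 1; done  # wait until the server accepts connections
script: nosetests
after_script:
  - kill "$(cat server.pid)"  # stop the server whether the tests passed or failed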
Related
I have some Python scripts that I'd like to run daily from a Windows PC.
My current workflow is:
The desktop PC stays on all day, every day, except for a weekly restart over the weekend
After the restart I open VS Code and run a little bash script ./start.sh that kicks off the tasks.
The above works reasonably well, but it is also fairly painful. I need to re-run start.sh whenever I close VS Code (e.g. for an update). Also, the processes use some local Python libraries, so I need to stop them if I'm going to update those libraries.
With regards to how to do this properly, 4 tools came to mind:
Windows Scheduler
Airflow
Prefect (https://www.prefect.io/)
Rocketry (https://rocketry.readthedocs.io/en/stable/)
However, I can't quite get my head around the fundamental issue that if Prefect/Airflow/Rocketry run on my PC, there is nothing that will restart them after the PC reboots. I'm also not sure these tools will give me the isolation I'd prefer.
Docker came to mind: I could put each task into a Docker image and run them via some form of Docker swarm or something like that. But I'm not sure if I'm re-inventing the wheel.
I'm 100% sure I'm not the first person in this situation. Could anyone point me to a guide on how this could be done well?
Note:
I am not considering running the Python scripts in the cloud. They interact with local tools that are only licensed for my PC.
You can definitely use Prefect for that - it's very lightweight and seems to match what you're looking for. You install it with pip install prefect and start the Orion API server with prefect orion start; once you create a Deployment and start an agent with prefect agent start -q default, you can even configure the schedule from the UI.
For more information about Deployments, check our FAQ section.
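For illustration, a minimal flow might look like the following sketch; the flow and task names are placeholders of mine, and scheduling itself is handled by the Deployment and agent rather than by the script:

from prefect import flow, task

@task
def run_daily_script():
    # call your existing script logic here
    ...

@flow
def daily_job():
    run_daily_script()

if __name__ == "__main__":
    daily_job()  # runs the flow once; the agent runs it on schedule via the Deployment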
It sounds like Rocketry could also be suitable. Rocketry can shut itself down using a task. You could create a task that:
Runs on the main thread and process (blocking the start of new tasks)
Waits or terminates all the currently running tasks (use the session)
Calls session.shut_down(), which sets a shutdown flag for the scheduler.
There is also an app configuration, shut_cond, which is simply a condition. If this condition is true, the scheduler exits, so alternatively you can use that.
Then, after the line app.run(), you simply have a line that runs the shutdown -r (restart) shell command, for example using the subprocess library. Then you need something that starts Rocketry again when the restart is completed. For this, perhaps this could be an answer: https://superuser.com/a/954957, or use Windows Scheduler to have a simple startup task that starts Rocketry.
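A rough sketch of that pattern, assuming the Rocketry 2.x API; the "daily" condition and the Windows shutdown flags are my own choices:

import subprocess

from rocketry import Rocketry
from rocketry.args import Session

app = Rocketry()

@app.task("daily", execution="main")  # main thread/process, so no new tasks start meanwhile
def do_shutdown(session=Session()):
    session.shut_down()  # flag the scheduler to exit

if __name__ == "__main__":
    app.run()
    # the scheduler has exited at this point; restart the machine
    subprocess.run(["shutdown", "/r", "/t", "0"])  # Windows restart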
Especially if you had Linux machines (Raspberry Pis, for example), you could integrate Rocketry with FastAPI and make a small cluster in which the Rocketry apps communicate with each other; just set the Rocketry script up as a startup service. One machine could be a backup that calls another machine's API to run the Linux restart command. The backup then executes tasks until the primary machine answers requests again (is up and running).
As the author of the library, I'm possibly biased toward my own projects, but Rocketry is very capable on complex scheduling problems - that's the purpose of the project.
You can use schtasks on Windows to schedule tasks like running a bash script or a Python script, and it's pretty reliable too.
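For example (the task name and paths are placeholders), the command below registers a daily run; an /SC ONSTART trigger would likewise cover restarting things after a reboot:

schtasks /Create /SC DAILY /ST 09:00 /TN "daily-python-job" /TR "C:\Python\python.exe C:\scripts\my_script.py"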
I am new to RabbitMQ. All the RabbitMQ tutorials in Python/PHP say that on the receiver side you run
php receiver.php
or
python receiver.py
But how can we do this in production? If we have to run the above command in production, either we have to append & at the end or we have to use nohup - which is not a good idea, is it?
How do you implement a RabbitMQ receiver on a production server in PHP/Python?
Consumers/receivers tend to be managed by a process controller. Either init.d or systemd can work. What I've seen used a lot more is something like http://supervisord.org/ or http://godrb.com/ or https://mmonit.com/
In production you ideally want not only something that will make sure a process is running, but also that the logs are separated and rotated, and that you have some amount of monitoring to make sure a process is not just constantly restarting, at boot or otherwise. Those tools are better suited to that than running things by hand.
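As an illustration, a supervisord program entry for such a receiver could look roughly like this (the program name and paths are placeholders):

[program:receiver]
command=python /srv/app/receiver.py
autostart=true
autorestart=true
stdout_logfile=/var/log/receiver.out.log
stderr_logfile=/var/log/receiver.err.log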
Hi, I just started working with Pelican and it really suits my needs. I had tried to build blogs in Flask and other frameworks, but I really just wanted something simple so I can post about math, and Pelican just works.
My question is: when I am testing on my machine, I start the server; however, when I stop the server to make some edits to my test blog and then try to reload the server, I get a "socket already in use" error. I am stopping my server with ctrl+z - am I doing this correctly?
Use ctrl+c to terminate the process. ctrl+z only suspends it and sends it to the background, where it still holds the socket.
On a separate note, since you are making changes and want to test them, it would be more convenient to use $ make devserver instead of $ make serve. See docs.
For your development server, you can also use the ./develop_server.sh script that ships with recent versions of Pelican (at least since 3.5.0).
Build the blog and start a server with ./develop_server.sh start: it regenerates the site each time you edit your blog (except for the settings). Simply stop it with ./develop_server.sh stop when you're finished.
When you press ctrl+c or ctrl+z, do not just restart the HTTP server: it is still running in the background, and that is exactly why you are getting that error message.
To see that the server is still running in the background after pressing any of the key combinations above, try to edit and save any file: you will see right away in the terminal that the regeneration of your pages kicks in again.
You can start the HTTP server with make devserver and stop it with ./develop_server.sh stop.
I'm using GitLab CI to automatically build a C++ project and run unit tests written in Python (they run the daemon, and then communicate with it via the network/socket-based interface).
The problem I'm finding is that when the tests are run by the GitLab CI runner, they fail for various reasons (one test stalls indefinitely on a particular network operation; another doesn't receive a packet that should have been sent).
BUT: when I SSH in and run the tests manually, they all pass (the tests also succeed on all of our developers' machines [Linux/Windows/OS X]).
At this point I've been trying to replicate enough of the build/test conditions that GitLab CI uses, but I don't know the exact details, and none of my experiments have reproduced the problem.
I'd really appreciate help with either of the following:
Guidance on running the tests manually outside of gitlab-ci, but replicating its environment so I can get the same errors/failures and debug the daemon and/or tests, OR
Insight into why the tests would fail when run by the GitLab CI runner
Sidetrack 1:
For some reason, not all the (mostly debugging) output that would normally be sent to the shell shows up in the gitlab-ci output.
Sidetrack 2:
I also played around with setting it up with Jenkins, but one of the tests fails to even connect to the daemon, while the rest connect fine.
- I usually replicate the problem by using a Docker container just for the runner and running the tests inside it - I don't know if you have it set up like this.
- Normally the test doesn't actually fail: if you log into the container, you will see that it actually does everything but doesn't report back to GitLab CI. Don't freak out - it does its job, it simply doesn't say so.
PS: you can check whether it's actually running by looking at the processes on the machine.
Example: I'm running GitLab CI with Java and Docker. GitLab CI starts doing its thing and then hangs at a download; meanwhile, I log into the container and check that it is actually working and that it manages to upload my compiled Docker image.
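Concretely, peeking inside the container while a job runs might look like this (the container ID will differ on your machine):

docker ps                      # find the container the runner started
docker exec -it <container-id> /bin/bash
ps aux                         # inside the container: confirm the daemon and tests are running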
I need a reliable way to check whether a Twisted-based server, started via twistd (and a TAC file), was started successfully. It may fail because some network options are set up wrong. Since I cannot access the twistd log (it goes to /dev/null, because I don't need the log clutter twistd produces), I need to find out whether the server was started successfully within a launch script that wraps the twistd call.
The launch-script is a Bash script like this:
#!/usr/bin/bash
twistd \
--pidfile "myservice.pid" \
--logfile "/dev/null" \
--python \
myservice.tac
All I found on the net are some hacks using ps or stuff like that. But I don't like an approach like that, because I don't think it's reliable.
So I'm wondering: is there a way to access the internals of Twisted and get all currently running Twisted applications? That way I could query the currently running apps for the name of my Twisted application (as I named it in the TAC file).
I'm also thinking about not using the twistd executable, but instead implementing a Python-based launch script that embeds the twistd content, like the answer to this question provides - but I don't know if that helps me get the status of the server.
So my question is simply: is there a reliable, not-ugly way to tell whether a Twisted server started with twistd came up successfully, when twistd logging is disabled?
You're explicitly specifying a PID file. twistd will write its PID into that file. You can check the system to see if there is a process with that PID.
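In the launch script, that check could look something like the sketch below; the one-second sleep is a heuristic of mine to give twistd time to daemonize and write the PID file:

# appended after the twistd call in the launch script
sleep 1
if [ -f "myservice.pid" ] && kill -0 "$(cat myservice.pid)" 2>/dev/null; then
    echo "server is running"
else
    echo "server failed to start" >&2
    exit 1
fi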
You could also re-enable logging with a custom log observer which only logs your startup event and discards all other log messages. Then you can watch the log for the startup event.
Another possibility is to add another server to your application which exposes the internals you mentioned. Then try connecting to that server and looking around to see what you wanted to see (just the fact that the server is running seems like a good indication that the process started up properly, though). If you make it a manhole server then you get the ability to evaluate arbitrary Python code, which lets you inspect any state in the process you want.
You could also just have your application code write out an extra state file that explicitly indicates successful startup. Make sure you delete it before starting the application and you'll have a fine indicator of success vs failure.
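A sketch of that marker-file idea inside the TAC file; the marker path is a placeholder, and depending on where your startup can fail, you may prefer to write the marker from your service's startService instead:

import os

from twisted.internet import reactor

MARKER = "myservice.started"  # placeholder path

# remove any stale marker before the server starts
if os.path.exists(MARKER):
    os.remove(MARKER)

def _mark_started():
    # only runs once the reactor is up, i.e. startup got this far successfully
    with open(MARKER, "w") as f:
        f.write("ok\n")

reactor.callWhenRunning(_mark_started)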