Slow Xvnc startup in a Kubernetes pod slave - python

We are using Xvnc to execute UI tests. The tests are written in Python and use the Jenkins Pipeline structure.
Everything worked fine until we ported our slaves from VMs to containers. The test session times are identical; however, the Xvnc wrapper now takes around 2 minutes to start up and shut down.
I tried looking around the web and didn't find any solution.
My suspicion is that the plugin implementation is causing the hiccup, because when I run the startup command from a shell-script stage instead, it starts quickly.
Has anyone else faced this?
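For reference, starting Xvnc directly from the test harness (the quick path) looks roughly like this on the Python side. This is a minimal sketch assuming TigerVNC's Xvnc binary is on the PATH; the display number and flags are illustrative:

```python
import os
import subprocess
import time

DISPLAY = ":42"  # hypothetical free display number

# Launch Xvnc directly rather than via the Jenkins Xvnc plugin wrapper.
xvnc = subprocess.Popen(
    ["Xvnc", DISPLAY, "-geometry", "1920x1080", "-SecurityTypes", "None"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
time.sleep(1)  # crude readiness wait; poll the X socket in real code
os.environ["DISPLAY"] = DISPLAY

try:
    pass  # ... run the UI tests here ...
finally:
    xvnc.terminate()
    xvnc.wait()
```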

Related

Service to trigger and run python scripts?

So far, for web scraping projects, I've used GAppsScript, meaning I can easily trigger the script to run once a day.
Is there an equivalent service for Python scripts? I have a Raspberry Pi, so I guess I could keep it on 24/7 and use cron jobs to trigger the script daily. But that seems rather wasteful, since I'm talking about a few small scripts that take only a few seconds to run.
Is there any service that allows me to trigger a Python script once a day (without needing to keep a local machine on 24/7)? The simpler the solution the better; I wouldn't want to over-engineer such a basic use case if a ready-made system already exists.
The only service I've found so far that does this is WayScript, and here's a Python example running in the cloud. The free tier should be enough for most simple/hobby-tier use cases.

Autoscaler: launching a simple Python script on an AWS Ray cluster with Docker (examples)

I am finding a serious lack of documentation for Ray's autoscaling; I cannot get anything to work.
Does anyone know of any basic examples of autoscaling with AWS that I can build on, i.e. a Dockerfile (or without Docker, I'm not fussy at this point), a config.yaml, and a simple_ray_script.py,
or anything at all that I can just download and run? That would be great.
The examples I have tried with minimal.yaml are too simple, in that any change to the config (e.g. applying a conda env) stops workers from being initiated, among a multitude of other issues. The examples in the Ray project don't work for me either.
So far I have found pretty much nothing that works. I just want to run a simple Ray Python script WITH dependencies that will launch and run on all workers, NOT just on the head node.
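To make the goal concrete, here is the shape of script I want to run. A minimal sketch, assuming a recent Ray where ray.init accepts a runtime_env, submitted to the head node with something like `ray submit config.yaml simple_ray_script.py`; the pip dependency list is illustrative:

```python
# simple_ray_script.py
import socket

import ray

# Connect to the already-running cluster and ship dependencies to every
# worker via runtime_env instead of baking them into the Docker image.
ray.init(
    address="auto",
    runtime_env={"pip": ["requests"]},  # hypothetical dependency list
)

@ray.remote
def where_am_i():
    # Report which node actually executed this task.
    return socket.gethostname()

# Fan out enough tasks that the autoscaler has a reason to add workers.
hosts = ray.get([where_am_i.remote() for _ in range(200)])
print(sorted(set(hosts)))  # more than one hostname means work left the head node
```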

Jenkins for running a background Script?

I wrote a Python script to send data from a local DB via REST to Kafka.
My goal: I would like this script to run indefinitely, either by restarting at set intervals (e.g. every 5 min) or whenever the DB gets new entries. I assume the set-intervals approach would be good enough, easier, and safer.
Someone suggested that I either run it via a cron job and use a monitoring tool, or do it with Jenkins (which he considered better).
My setting: I am not a DevOps engineer, and I would like to know about the possibilities and risks of setting this script up. It would be no trouble to recreate the script in Java if that improves the situation.
My question: I did try to learn what Jenkins is about, and I think I understood the CI and CD parts, but I don't see how this could help me with my goal. Can someone with experience on this topic elaborate?
If you would suggest a cron job, what are common methods or tools for monitoring such a setup? I think the main risks are failing to send the data due to connection issues from the local machine to REST or to the local DB, or the script not being started properly at the specified time.
Jobs can be scheduled at regular intervals in Jenkins just like with cron; in fact, it uses the same syntax. What's nice about scheduling the job via Jenkins is that it's very easy to have it send an email if the job exits with a non-zero return code. I've moved all of my cron jobs into Jenkins and it's working well. So by running it via Jenkins you're covering the execution side and the monitoring side at the same time.
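To make that concrete, the script only has to exit non-zero on failure for Jenkins to notice. Below is a rough sketch of the restart-every-few-minutes design from the question; the DB schema, REST endpoint, and state file are all invented for illustration:

```python
# poll_and_send.py
import sqlite3

import requests

DB_PATH = "local.db"                       # hypothetical local DB
STATE_PATH = "last_id.txt"                 # remembers what was already sent
ENDPOINT = "http://kafka-proxy:8082/send"  # hypothetical REST endpoint

def read_last_id():
    try:
        with open(STATE_PATH) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return 0

def main():
    last_id = read_last_id()
    conn = sqlite3.connect(DB_PATH)
    rows = conn.execute(
        "SELECT id, payload FROM entries WHERE id > ? ORDER BY id",
        (last_id,),
    ).fetchall()

    for row_id, payload in rows:
        # A failed request raises, the script exits non-zero, and Jenkins
        # flags the build and sends the email: monitoring for free.
        resp = requests.post(ENDPOINT, json={"id": row_id, "payload": payload})
        resp.raise_for_status()
        last_id = row_id

    with open(STATE_PATH, "w") as f:
        f.write(str(last_id))

if __name__ == "__main__":
    main()
```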

How can a few small Python scripts be run periodically with Docker?

I currently have a handful of small Python scripts on my laptop that are set to run every 1-15 minutes, depending on the script in question. They perform various tasks for me like checking for new data on a certain API, manipulating it, and then posting it to another service, etc.
I have a NAS/personal server (unRAID) and was thinking about moving the scripts to there via Docker, but since I'm relatively new to Docker I wasn't sure about the best approach.
Would it be correct to take something like the Phusion Baseimage, which includes cron, package my scripts and crontab as dependencies in the image, and write the Dockerfile to initialize all of this? Or would the more canonical approach be to modify the scripts so that they are threaded with recursive timers, and run each script individually in its own official Python image?
No dude, just install Python in the Docker container/image, move your scripts over, and run them as normal.
You may have to expose some ports or add a firewall exception, but otherwise your container behaves like a native Linux environment.
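If you prefer one long-running container per script over cron, the container just needs a foreground process. A minimal sketch of such a loop, where the interval and the do_work body are placeholders for your real task:

```python
# runner.py
import time

INTERVAL_SECONDS = 5 * 60  # hypothetical: run every five minutes

def do_work():
    # Placeholder for the actual task: check the API, transform, post, etc.
    print("checking API...")

if __name__ == "__main__":
    while True:
        started = time.monotonic()
        try:
            do_work()
        except Exception as exc:  # keep the container alive on errors
            print(f"run failed: {exc}")
        # Sleep out the remainder of the interval.
        elapsed = time.monotonic() - started
        time.sleep(max(0, INTERVAL_SECONDS - elapsed))
```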

Running a Python script that logs into a Spark EC2 cluster and then runs a script

Is there documentation on writing a script that can log into a Spark cluster and run another script? I've been able to launch clusters with Linux Bash scripts, but I'm wondering if there is anything more general (Python would be great). I would like a script that reads certain parameters and then runs the jobs automatically, without the user having to log in. I want this to be as easy and intuitive as possible (so someone less tech-savvy can just start the script without having to worry about Spark or AWS), or something that can run in the background of a web app.
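One way to sketch this in Python is to wrap SSH with paramiko and call spark-submit on the master node. This is only an illustration under assumed names; the host, key path, and remote script path below are made up:

```python
# run_on_cluster.py
import os
import sys

import paramiko

MASTER_HOST = "ec2-xx-xx-xx-xx.compute-1.amazonaws.com"     # hypothetical master
KEY_PATH = os.path.expanduser("~/.ssh/my-cluster-key.pem")  # hypothetical key
REMOTE_SCRIPT = "/home/ec2-user/job.py"                     # hypothetical script

def run_job(extra_args=""):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(MASTER_HOST, username="ec2-user", key_filename=KEY_PATH)
    try:
        # Run the job remotely and stream its output back to the caller.
        _, stdout, stderr = client.exec_command(
            f"spark-submit {REMOTE_SCRIPT} {extra_args}"
        )
        print(stdout.read().decode())
        print(stderr.read().decode(), file=sys.stderr)
    finally:
        client.close()

if __name__ == "__main__":
    run_job(" ".join(sys.argv[1:]))  # parameters passed straight through
```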
