Saltstack salt-master service start is taking too long - python

I'm in trouble with Saltstack since I started two different developments with Python using its API. Sometimes the services crash, and when I try to start them again or reboot the servers, it takes more than 24 hours to start. Logs are empty, and if I start salt-master in debug mode nothing happens.
# dpkg -l| grep salt
ii salt-common 2014.1.5+ds-1~bpo70+1
ii salt-master 2014.1.5+ds-1~bpo70+1
Note: it's happening to me on two different machines. OS: Debian sid.
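For reference, this is roughly how I start it in the foreground in debug mode (and still nothing is printed):
# salt-master -l debug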

Whoa, 24 hours is a ridiculous amount of time to start up.
Have you added any custom grains, modules or external pillars?
Have you tried upgrading? 2014.1.10 is now out.

Related

Steam browser protocol failing silently when run over ssh

I am trying to launch a steam game on my computer through an ssh connection (into a Win10 machine). When run locally, the following python call works.
subprocess.run("start steam://rungameid/[gameid]", shell=True)
However, whenever I run this over an ssh connection—either in an interactive interpreter or by invoking a script on the target machine—my steam client suddenly exits.
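For example, invoking a script over ssh looks something like this (launch_game.py is a hypothetical script containing just the subprocess call above):
ssh user@win10-host python launch_game.py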
I haven't noticed anything in the Steam logs, except that Steam\logs\connection_log.txt contains a logoff and a new session start each time. This is not the case when I run the command locally on my machine. Why is Steam aware of the different sources of this command, and why is this causing the Steam connection to drop? Can anyone suggest a workaround?
Thanks.
Steam is likely failing to launch the application because Windows services, including OpenSSH server, cannot access the desktop, and, hence, cannot launch GUI applications. Presumably, Steam does not expect to run an application in an environment in which it cannot interact with the desktop, and this is what eventually causes Steam to crash. (Admittedly, this is just a guess—it's hard to be sure exactly what is happening when the crash does not seem to appear in the logs or crash dumps.)
You can see a somewhat more detailed explanation of why starting GUI applications over SSH fails when the server is run as a Windows service in this answer by domih to this question about running GUI applications over SSH on Windows.
domih also suggests some workarounds. If it is an option for you, the simplest one is probably to download and run OpenSSH server manually instead of running the server as a service. You can find the latest release of Win32-OpenSSH/Windows for OpenSSH here.
The other workaround that still seems to work is to use schtasks. The idea is to create a scheduled task that runs your command—the Task Scheduler can access the desktop. Unfortunately, this is only an acceptable solution if you don't mind waiting until at least the next minute; schtasks can only schedule tasks to occur exactly on the minute. Moreover, to be safe to run at any time, code should probably schedule the task for at least one minute into the future, meaning that wait times could be anywhere from one to two minutes.
There are also other drawbacks to this approach. For example, it's probably harder to monitor the running process this way. However, it might be an acceptable solution in some circumstances, so I've written some Python code that can be used to run a program with schtasks, along with an example. The code depends on the shortuuid package; you will need to install it before trying the example.
import subprocess
import tempfile
import shortuuid
import datetime

def run_with_schtasks_soon(s, delay=2):
    """
    Run a program with schtasks with a delay of no more than
    delay minutes and no less than delay - 1 minutes.
    """
    # delay needs to be no less than 2 since, at best, we
    # could be calling subprocess at the end of the minute.
    assert delay >= 2
    task_name = shortuuid.uuid()
    temp_file = tempfile.NamedTemporaryFile(mode="w", suffix=".bat", delete=False)
    temp_file.write('{}\nschtasks /delete /tn {} /f\ndel "{}"'.format(s, task_name, temp_file.name))
    temp_file.close()
    run_time = datetime.datetime.now() + datetime.timedelta(minutes=delay)
    time_string = run_time.strftime("%H:%M")
    # This is locale-specific. You will need to change this to
    # match your locale. (locale.setlocale and the "%x" format
    # does not seem to work here)
    date_string = run_time.strftime("%m/%d/%Y")
    return subprocess.run("schtasks /create /tn {} /tr {} /sc once /st {} /sd {}".format(task_name,
                                                                                         temp_file.name,
                                                                                         time_string,
                                                                                         date_string),
                          shell=True)

if __name__ == "__main__":
    # Runs The Witness (if you have it)
    run_with_schtasks_soon("start steam://rungameid/210970")

How do I make my python script less laggy?

I am new to Python and I've just created this script:
import os
import os.path
import time

while True:
    if os.path.isfile('myPathTo/shutdown.svg'):
        os.remove('myPathTo/shutdown.svg')
        time.sleep(1)
        os.system('cd C:\Windows\PSTools & psshutdown -d -t 0')
As you can see, this script is very short, and I think there is a way to make it less laggy. On my PC, it is using about 30% of my processor:
[screenshot: Python CPU usage stats on my PC]
I don't really know why it is using so many resources; I need your help :)
A little explanation of the program:
I'm using IFTTT to send a file to my Google Drive, which is synchronized on my PC (shutdown.svg), when I ask Google Home to shut down my PC.
When Python detects the file, it has to remove it and shut down the PC. I've added time between these actions to make sure the script does not check the file too many times, to reduce lag. Maybe 1 second is too short?
I've added time between these actions to make sure the script does not check the file too many times to reduce lag
This loop only sleeps for 1 second just before shutting down, when the file is found; i.e. it never sleeps while it is waiting for the file to appear. So move the time.sleep(1) out of the if block, as in the sketch below.
Maybe 1 second is too short?
If you can, make this sleep time as long as possible.
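A minimal sketch of the rearranged loop (same paths as the original script; the 10-second interval is an arbitrary choice):

import os
import os.path
import time

while True:
    if os.path.isfile('myPathTo/shutdown.svg'):
        os.remove('myPathTo/shutdown.svg')
        os.system('cd C:\Windows\PSTools & psshutdown -d -t 0')
    # sleep on every pass, found or not, so the loop no longer
    # hammers the filesystem at full speed
    time.sleep(10)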
If your only task is to shut down the PC, there are many other ways to watch for a trigger, such as cron-style scheduling of a regularly running script, or setting up a lightweight server.

Run python job every x minutes

I have a small Python script that basically connects to a SQL Server (Microsoft) database and gets users from there, and then syncs them to another MySQL database; basically I'm just running queries to check if a user exists and, if not, adding that user to the MySQL database.
The script usually takes around 1 min to sync. I require the script to do its work every 5 mins (for example), exactly once (one sync per 5 mins).
What would be the best way to go about building this?
I have some test data for the users, but on the real site there are a lot more users, so I can't guarantee the script takes 1 min to execute; it might even take 20 mins. However, an interval of, say, 15 mins between executions would be ideal for the problem...
Update:
I have the connection params for the SQL Server Windows DB, so I'm using a small Ubuntu server to sync between the two databases, which are located on different servers. So let's say db1 (Windows) and db2 (Linux) are the database servers; I'm using s1 (a Python server) with the pymssql and mysql Python modules to sync.
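Roughly, the check-and-insert step looks like this (just a sketch; the table and column names and credentials are placeholders):

import pymssql
import MySQLdb

# connection details are placeholders
src = pymssql.connect(server="db1", user="sync", password="...", database="users")
dst = MySQLdb.connect(host="db2", user="sync", passwd="...", db="users")

src_cur = src.cursor()
dst_cur = dst.cursor()

# pull every username from SQL Server, insert the ones MySQL lacks
src_cur.execute("SELECT username FROM users")
for (username,) in src_cur.fetchall():
    dst_cur.execute("SELECT 1 FROM users WHERE username = %s", (username,))
    if dst_cur.fetchone() is None:
        dst_cur.execute("INSERT INTO users (username) VALUES (%s)", (username,))

dst.commit()
src.close()
dst.close()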
Regards
I am not sure cron is right for the job. It seems to me that if you have it run every 15 minutes but sometimes a synch takes 20 minutes you could have multiple processes running at once and possibly collide.
If the driving force is a constant wait time between the variable execution times then you might need a continuously running process with a wait.
import time

def main():
    loopInt = 0
    while loopInt < 10000:
        synchDatabase()  # your existing sync routine
        loopInt += 1
        print("call #" + str(loopInt))
        time.sleep(300)  # sleep 5 minutes

main()
(Obviously not continuous, but long-running.) You can change the while condition to True and it will run continuously (and comment out loopInt += 1).
Edited to add: please see the note in the comments about monitoring the process, as you don't want the script to hang or crash without you being aware of it.
You might want to use a system that handles queues, for example RabbitMQ, and use Celery as the python interface to implement it. With Celery, you can add tasks (like execution of a script) to a queue or run a schedule that'll perform a task after a given interval (just like cron).
Get started: http://celery.readthedocs.org/en/latest/
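For illustration, a periodic-task sketch along those lines (assuming a recent Celery with a local RabbitMQ broker; the module name tasks.py and the task itself are made up):

from celery import Celery

app = Celery("sync", broker="amqp://localhost")

# Ask Celery beat to fire the sync task every 5 minutes
# (start with: celery -A tasks worker --beat).
app.conf.beat_schedule = {
    "sync-users-every-5-minutes": {
        "task": "tasks.sync_users",
        "schedule": 300.0,  # seconds
    },
}

@app.task
def sync_users():
    # the existing SQL Server -> MySQL check-and-insert logic goes here
    pass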

Python crontab stops without error

I have a Python/Django application running on an Ubuntu Server. The application updates the stock for a webshop.
I have made a job in crontab to update the stock every night.
# daily stock update, starts at 23:30
30 23 * * * ./Cronjobs/stockUpdate.sh
stockUpdate.sh:
#!/bin/bash
cd Projecten
source booksVenv/bin/activate
cd Books/books
cat stockUpdate.py | python manage.py shell
stockUpdate.py:
from core.models import Stock
Stock().getAllStock()
Running Stock().getAllStock() by hand works fine. For example: I log in on the server via SSH, activate the virtual environment, start the Django shell, and run the getAllStock method.
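That manual session looks roughly like this (following the same paths as stockUpdate.sh; the user and host are placeholders):
$ ssh user@server
$ cd Projecten
$ source booksVenv/bin/activate
$ cd Books/books
$ python manage.py shell
>>> from core.models import Stock
>>> Stock().getAllStock()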
However, the crontab job just seems to stop while running getAllStock, without an error. A log is placed in /var/mail/.
When I open the file with nano here is what I get.
# more than 500 pages of prints
Go to next page.
>>>
Here is what I think could be going wrong:
* I use too many print statements in my code, they mess up the job.
* The job starts at 23:30 but takes a few hours, so it is stopped when the next day begins (half an hour in).
Can someone please tell me why this is occurring, and give me some details on how to debug and fix the issue?

Redis Crash Windows Server 2003 R2

I’m running redis, 32bit, 2.0.2 from the cygwin compilation here: http://code.google.com/p/servicestack/wiki/RedisWindowsDownload
I am running it from the terminal. It works great for about 24 hours and then it crashes: no errors, it just closes. My config file has the defaults except:
# save 900 1
# save 300 10
# save 60 10000
appendonly no
appendfsync no
I tried using a newer version of redis. Redis-2.2.5 win32 here: https://github.com/dmajkic/redis/downloads
However, this one I can run, but it throws an 'unpacking too many values' error when tasks are added onto it with Celery 2.2.6.
I haven't run this version long enough to see if it experiences the same crashing error that 2.0.2 has after roughly 24 hours.
Also, I run a Redis flushdb at 1 a.m. every day, but the crash can happen at any time of day, normally around 24 hours after the last crash.
Any thoughts?
Thanks!
additions
Sorry, I forgot to mention that Twisted is polling data every 20 seconds and storing it into Redis, which roughly translates to close to 700 thousand records a day, or 4 to 5 GB of RAM used. There is no problem with Twisted; I just thought it might be relevant to the question.
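For context, the polling piece is along these lines (only a sketch; poll_source and the key name are made up):

from twisted.internet import task, reactor
import redis

r = redis.Redis(host="localhost", port=6379)

def poll_source():
    # hypothetical stand-in for the real 20-second data pull
    return b"reading"

def poll_and_store():
    r.rpush("readings", poll_source())

# fire poll_and_store every 20 seconds for as long as the reactor runs
task.LoopingCall(poll_and_store).start(20.0)
reactor.run()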
Follow-up question:
Thanks, Dhaivat Pandya!
Are there key-value databases that are more supportive of the Windows environment?
Redis is not supposed to work on Windows, and the projects that try to make it work on Windows all have numerous bugs that make them unstable.
