I want to get a list of all daemons/services in Linux using Python.
I am able to get information using
service --status-all
However, I don't want to execute terminal commands through Python. Is there any API available to perform this operation?
My project involves a lot of other work, so I need to be careful with memory and CPU usage; I may also need to run the command every 10 or 60 seconds, depending on configuration, so I don't want to create too many processes.
Earlier I had used subprocess.check_output(command), but I was told by my manager to avoid executing commands and to try available packages instead. I searched for some, but could only find packages that monitor services, not list them.
Finally, my objective is to minimize the load on the system. Any suggestions?
Additional details:
Python 3.7.2
Ubuntu 16
With psutil (sudo pip3 install psutil):

#!/usr/bin/env python
import psutil

def show_services():
    # build one psutil.Process per PID instead of constructing it twice
    return [(proc.name(), proc.status())
            for proc in map(psutil.Process, psutil.pids())]

def main():
    for name, status in show_services():
        if status == 'running':
            print(name)

if __name__ == '__main__':
    main()
show_services returns a list of (name, status) tuples.
There is a package for that, pystemd.
Quoting its front page:
This library allows you to talk to systemd over dbus from python, without actually thinking that you are talking to systemd over dbus. This allows you to programmatically start/stop/restart/kill and verify services status from systemd point of view, avoiding executing subprocess.Popen(['systemctl', ... and then parsing the output to know the result.
so
pip install pystemd
and
from pystemd.systemd1 import Manager
manager = Manager()
manager.load()
print(manager.Manager.ListUnitFiles())
For efficiency, you don't need to reload the list of services each time if it hasn't changed. I don't have the right solution for that in my head, but I guess you could watch the unit file directories with inotify, for example, and only reload when they change, roughly as sketched below.
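A rough sketch of that idea, assuming the third-party inotify_simple package (pip install inotify_simple) and the usual systemd unit-file directories; these names are my assumptions, not part of pystemd:

from inotify_simple import INotify, flags
from pystemd.systemd1 import Manager

manager = Manager()
manager.load()

# watch the common unit-file directories for changes
inotify = INotify()
for path in ('/etc/systemd/system', '/lib/systemd/system'):
    inotify.add_watch(path, flags.CREATE | flags.DELETE | flags.MODIFY)

cached_units = manager.Manager.ListUnitFiles()
while True:
    # block for up to 10 s; refresh the cache only if unit files changed
    if inotify.read(timeout=10_000):
        cached_units = manager.Manager.ListUnitFiles()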
I have a program which can optionally use tqdm to display a progress bar for long calculations.
I've packaged the program using pyinstaller.
The packaged program runs to completion just fine if I don't enable tqdm.
If I enable tqdm, the package works fine and displays progress bars until it gets to the very end; then the message below is displayed and the packaged program is relaunched (with the wrong args).
multiprocessing/resource_tracker.py:104: UserWarning: resource_tracker:
process died unexpectedly, relaunching. Some resources might leak.
There is nothing fancy about the way the program invokes tqdm.
There is a loop that iterates over the members of a list.
The program does not import the multiprocessing module.
from tqdm import tqdm  # imported at module level in the full program

def best_cost(self, guesses, steps=1, verbose=0) -> None:
    """Find the guess with the minimum cost."""
    self.cost = float("inf")
    for guess in guesses if verbose < 3 else tqdm(guesses, mininterval=1.0):
        groups = dict()
        for (word, frequency) in self.words.items():
            result = calc_result(guess, word)
            groups.setdefault(result, {})[word] = frequency
        self.check_cost(guess, groups, steps=steps)
    return
The problem occurs whether pyinstaller creates the package with --onedir or --onefile.
I'm running on macOS Monterey 12.4 on an Intel processor.
I'm using Python 3.10.4, but the problem also occurs with 3.9. Pip-installed packages include pyinstaller 5.1, pyinstaller-hooks-contrib 2022.7, and tqdm 4.64.0.
I package the program in a virtualenv.
I found this in the documentation for the multiprocessing module:
On Unix using the spawn or forkserver start methods will also start a resource tracker process which tracks the unlinked named system resources (such as named semaphores or SharedMemory objects) created by processes of the program. When all processes have exited the resource tracker unlinks any remaining tracked object. Usually there should be none, but if a process was killed by a signal there may be some “leaked” resources. (Neither leaked semaphores nor shared memory segments will be automatically unlinked until the next reboot. This is problematic for both objects because the system allows only a limited number of named semaphores, and shared memory segments occupy some space in the main memory.)
I can't figure out how to debug the problem, since it only occurs with the pyinstaller-packaged program and my IDE (vscode) doesn't help.
My program doesn't use the multiprocessing module.
The tqdm documentation doesn't mention issues like this, but tqdm does import the multiprocessing module to create locks (semaphores) in the tqdm class, which get used by the instances.
There aren't any tqdm methods I can find to delete the semaphore resources; I guess that is just supposed to happen automatically when the program ends.
I think the pyinstaller-packaged program may also use the multiprocessing module.
Perhaps pyinstaller is interfering with deletion of the tqdm semaphores somehow?
The packaged program works fine until it starts to exit and then gets restarted.
Does anyone know how I can get the packaged program to exit cleanly?
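One hedged suggestion, not confirmed against this particular program: PyInstaller's documentation recommends calling multiprocessing.freeze_support() immediately after the if __name__ == '__main__': line of a frozen app, so that helper processes (such as the resource tracker that multiprocessing spawns on macOS) start correctly instead of relaunching the main program with their own arguments. A minimal sketch, where main() stands in for the program's real entry point:

import multiprocessing

if __name__ == '__main__':
    multiprocessing.freeze_support()  # no-op when the app is not frozen
    main()  # hypothetical entry point of the program described above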
I have a simulation inside Blender that I would like to control externally using the TensorFlow library. The complete process would go something like this:
while True:
    state = observation_from_blender()
    action = find_action_using_tensorflow_neural_net(state)
    take_action_inside_blender(action)
I don't have much experience with the threading or subprocess modules and so am unsure as to how I should go about actually building something like this.
Rather than mess around with TensorFlow connections and APIs, I'd suggest you take a look at the OpenAI Universe Starter Agent [1]. The advantage here is that as long as you have a VNC session open, you can connect a TF-based system to do reinforcement learning on your actions.
Once you have a model constructed via this, you can focus on actually building a lower level API system for the two things to talk to each other.
[1] https://github.com/openai/universe-starter-agent
Thanks for your response. Unfortunately, trying to get Universe working with my current setup was a bit of a pain. I'm also on a fairly tight deadline so I just needed something that worked straight away.
I did find a somewhat DIY solution that worked well for my situation using the pickle module. I don't really know how to convert this approach into proper pseudocode, so here is the general outline:
Process 1 - TensorFlow:
    load up TF graph
    wait until pickled state arrives

while not terminated:
    Process 2 - Blender:
        run simulation until next action required
        pickle state
        wait until pickled action arrives
    Process 1 - TensorFlow:
        unpickle state
        feedforward state through neural net, find action
        pickle action
        wait until next pickled state arrives
    Process 2 - Blender:
        unpickle action
        take action
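As actual code, a minimal sketch of that hand-off over a local TCP socket (the address, net(), and the helper names are placeholders, not my real setup):

import pickle
import socket
import struct

HOST, PORT = "127.0.0.1", 5005  # placeholder address

def send_obj(sock, obj):
    # pickle obj and send it with a 4-byte length prefix
    data = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_obj(sock):
    # read a length-prefixed pickle and unpickle it
    size = struct.unpack("!I", sock.recv(4))[0]
    buf = b""
    while len(buf) < size:
        buf += sock.recv(size - len(buf))
    return pickle.loads(buf)

# Process 1 (TensorFlow side): accept one connection, then loop:
#     state = recv_obj(conn)            # wait until pickled state arrives
#     action = net(state)               # feedforward, find action
#     send_obj(conn, action)            # pickle action, send it back
# Process 2 (Blender side): connect with socket.create_connection, then loop:
#     send_obj(conn, observation_from_blender())   # pickle state
#     take_action_inside_blender(recv_obj(conn))   # unpickle action, take it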
This approach worked well for me, but I imagine there are more elegant low level solutions. I'd be curious to hear of any other possible approaches that achieve the same goal.
I did this to install tensorflow with the python version that comes bundled with blender.
Blender version: 2.78
Python version: 3.5.2
First of all, you need to install pip for Blender's bundled Python. For that, I followed the instructions in this link: https://blender.stackexchange.com/questions/56011/how-to-use-pip-with-blenders-bundled-python. (What I did was drag the python3.5m icon into the terminal and then type the command '~/get-pip.py'.)
Once you have pip installed you are all set up to install 3rd party modules and use them with blender.
Navigate to the bin folder inside the '/home/path to blender/2.78/' directory. To install tensorflow, drag the python3.5m icon into the terminal, then drag the pip3 icon into the terminal and give the command install tensorflow.
I got an error mentioning that the module lib2to3 was not found, and even installing the module didn't help: I got the message that no such module exists. Fortunately, I have Python 3.5.2 installed on my machine, so I navigated to /usr/lib/python3.5, copied the lib2to3 folder from there to /home/user/blender/2.78/python/lib/python3.5, and then ran the installation command for tensorflow again. The installation should complete without any errors. Make sure you test the installation by importing tensorflow.
I was following the tutorial to create an Alexa app using Python:
Python Alexa Tutorial
I was able to successfully follow all the steps and get the app to work. I now want to modify the Python code and use external libraries such as import requests
or any other libraries that I install using pip. How would I set up my lambda function to include any pip packages that I install locally on my machine?
As described in the official Amazon documentation linked here, it is as simple as creating a zip of all the folder contents after installing the required packages in the folder where you have your Python lambda code.
As Vineeth pointed out above in his comment, the very first step in moving from an inline code editor to a zip file upload approach is to change your lambda function handler name under configuration settings to include the Python script file name that holds the lambda handler:
lambda_handler => {your-python-script-file-name}.lambda_handler.
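For example, if your zip contains a file named lambda_function.py (a hypothetical name) defining the function below, the handler setting would be lambda_function.lambda_handler:

# lambda_function.py (hypothetical file name)
def lambda_handler(event, context):
    # Lambda imports the module named in the handler setting and calls this
    return {"statusCode": 200, "body": "ok"}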
Other solutions like python-lambda and lambda-uploader help with simplifying the process of uploading and, most importantly, local testing. These will save a lot of time in development.
The official documentation is pretty good. In a nutshell, you need to create a zip file of a directory containing both the code of your lambda function and all external libraries you use at the top level.
You can simulate that by deactivating your virtualenv, copying all your required libraries into the working directory (which is always in sys.path if you invoke a script on the command line), and checking whether your script still works; a sketch of the zipping step follows.
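As a rough sketch of the zipping step itself, assuming the code and dependencies were first collected into a package/ directory (an illustrative name, e.g. via pip install -r requirements.txt --target package/):

import os
import zipfile

def build_lambda_zip(src_dir="package", out_path="function.zip"):
    # zip the directory contents so code and libraries sit at the top level
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, src_dir))

build_lambda_zip()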
You may want to look into using frameworks such as zappa which will handle packaging up and deploying the lambda function for you.
You can use that in conjunction with flask-ask to have an easier time making Alexa skills. There's even a video tutorial of this (from the zappa readme) here
To solve this particular problem, we're using a library called juniper. In a nutshell, all you need to do is create a very simple manifest file that looks like this:
functions:
  # Name the zip file you want juni to create
  router:
    # Where are your dependencies located?
    requirements: ./src/requirements.txt
    # Your source code.
    include:
      - ./src/lambda_function.py
From this manifest file, calling juni build will create the zip file artifact for you. The file will include all the dependencies you specify in requirements.txt.
The command creates the file ./dist/router.zip. We're using that file in conjunction with a SAM template; however, you can also take that zip and upload it through the console or the awscli.
Echoing @d3ming's answer, a framework is a good way to go at this point. Creating the deployment package manually isn't impossible, but you'll need to upload your packages' compiled code, and if you're compiling that code on a non-Linux system, the chance of running into issues with differences between your system and the Lambda function's deployed environment is high.
You can work around that by compiling your code on a Linux machine or in a Docker container, but between all that complexity you can get much more from adopting a framework.
Serverless is well adopted and has support for custom python packages. It even integrates with Docker to compile your python dependencies and build the deployment package for you.
If you're looking for a full tutorial on this, I wrote one for Python Lambda functions here.
Amazon created a repository that deals with your situation:
https://github.com/awsdocs/aws-lambda-developer-guide/tree/master/sample-apps/blank-python
The blank app is an example of how to push a lambda function that depends on requirements, with the bonus of being made by Amazon.
All you need to do is follow the step-by-step guide and update the repository based on your needs.
For some lambda POCs and fast lambda prototyping, you can include and use the following function _install_packages. You can place a call to it before the lambda handler function (for package installation at lambda init time, if your deps need less than 10 seconds to install) or at the beginning of the lambda handler (this will run the installation exactly once, on the first lambda event). Given the pip install options used, the packages to be installed must provide binary-installable versions for manylinux.
_installed = False

def _install_packages(*packages):
    global _installed
    if not _installed:
        import os
        import sys
        import time
        _started = time.time()
        os.system("mkdir -p /tmp/packages")
        _packages = " ".join(f"'{p}'" for p in packages)
        print("INSTALLED:")
        os.system(
            f"{sys.executable} -m pip freeze --no-cache-dir")
        print("INSTALLING:")
        os.system(
            f"{sys.executable} -m pip install "
            f"--no-cache-dir --target /tmp/packages "
            f"--only-binary :all: --no-color "
            f"--no-warn-script-location {_packages}")
        sys.path.insert(0, "/tmp/packages")
        _installed = True
        _ended = time.time()
        print(f"package installation took: {_ended - _started:.2f} sec")

# usage example before lambda handler
_install_packages("pymssql", "requests", "pillow")

def lambda_handler(event, context):
    pass  # lambda code

# usage example from within the lambda handler
def lambda_handler(event, context):
    _install_packages("pymssql", "requests", "pillow")
    pass  # lambda code
The examples above install the Python packages pymssql, requests, and pillow.
Here is an example lambda that installs requests and then calls ifconfig.me to obtain its egress IP address.
import json

_installed = False

def _install_packages(*packages):
    global _installed
    if not _installed:
        import os
        import sys
        import time
        _started = time.time()
        os.system("mkdir -p /tmp/packages")
        _packages = " ".join(f"'{p}'" for p in packages)
        print("INSTALLED:")
        os.system(
            f"{sys.executable} -m pip freeze --no-cache-dir")
        print("INSTALLING:")
        os.system(
            f"{sys.executable} -m pip install "
            f"--no-cache-dir --target /tmp/packages "
            f"--only-binary :all: --no-color "
            f"--no-warn-script-location {_packages}")
        sys.path.insert(0, "/tmp/packages")
        _installed = True
        _ended = time.time()
        print(f"package installation took: {_ended - _started:.2f} sec")

# usage example before lambda handler
_install_packages("requests")

def lambda_handler(event, context):
    import requests
    return {
        'statusCode': 200,
        'body': json.dumps(requests.get('http://ifconfig.me').content.decode())
    }
Since single-quote escaping is handled when building pip's command line, you can specify a version in a package spec, e.g. pillow<9, which will install the most recent 8.x.x version of Pillow.
I too struggled with this for a while. After deep-diving into AWS resources, I learned that a lambda function on AWS runs on Linux, and it's very important to have a Python package version that matches that Linux environment.
You may find more information on this at:
https://aws.amazon.com/lambda/faqs/
Follow these steps to download the right version:
1. Find the .whl file of the package on PyPI and download it to your local machine.
2. Zip the packages and add them as a layer in AWS Lambda (a Python layer zip is expected to contain its packages under a python/ directory).
3. Add the layer to the lambda function.
Note: Please make sure that the version of the Python package you're trying to install matches the Linux OS on which AWS Lambda runs its compute tasks.
References:
https://pypi.org/project/Pandas3/#files
A lot of Python libraries can be imported via layers here: https://github.com/keithrozario/Klayers, or you can use a framework like serverless that has plugins to package dependencies directly into your artifact.
I am currently working on a project to embed a Python interpreter in my website--I would like users to be able to execute their own Python code on the server-side in a secure Python sandbox. I have been using PyPy's sandboxing features and have set my virtual tmp/ directory (which corresponds to the real directory that the sandbox can read from) to my server-side files. So ideally, users will be able to run their own code on the server and use whatever libraries I make available to them (with no ability to write to any of these files/make system calls/do anything in general that would mess up my files).
The issue is that within my server-side code I use standard Python libraries as well as others I have installed separately (e.g. "import datetime", "import pandas", etc.). While the sandbox can read my server-side code it ends up failing whenever I try to import additional libraries within this code. I have thought of setting my virtual tmp/ directory to include my server-side files + the standard Python libraries + separately installed libraries such as PANDAS. However, when I tried setting it to just the Python libraries it ended up not working, not to mention that this solution seems pretty inelegant.
Any thoughts on how to go about fixing this? I appreciate whatever advice anyone may have. Thanks!
EDIT: I think part of the answer lies in the following portion of pypy_interact.py (the full source code can be found here: https://github.com/ojii/sandlib/blob/master/sandlib/pypy_interact.py):
def build_virtual_root(self):
    # build a virtual file system:
    #  * can access its own executable
    #  * can access the pure Python libraries
    #  * can access the temporary usession directory as /tmp
    exclude = ['.pyc', '.pyo']
    if self.tmpdir is None:
        tmpdirnode = Dir({})
    else:
        tmpdirnode = RealDir(self.tmpdir, exclude=exclude)
    libroot = str(LIB_ROOT)

    return Dir({
        'bin': Dir({
            'pypy-c': RealFile(self.executable),
            'lib-python': RealDir(os.path.join(libroot, 'lib-python'),
                                  exclude=exclude),
            'lib_pypy': RealDir(os.path.join(libroot, 'lib_pypy'),
                                exclude=exclude),
        }),
        'tmp': tmpdirnode,
    })
I am relatively new to Python and can't seem to figure out how to use this method properly. Any suggestions would be greatly appreciated.
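One untested idea I've been considering, assuming the Dir/RealDir helpers and the PyPySandboxedProc class from pypy_interact.py are importable (the subclass name and the site-packages path below are hypothetical): subclass it and graft extra read-only directories onto the virtual root.

class MySandboxedProc(PyPySandboxedProc):
    def build_virtual_root(self):
        root = PyPySandboxedProc.build_virtual_root(self)
        # expose the real site-packages (pandas etc.) read-only in the vfs
        root.entries['site-packages'] = RealDir(
            '/usr/lib/python2.7/site-packages', exclude=['.pyc', '.pyo'])
        return root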
I'm writing a simple IronWorker in Python to do some work with the AWS API.
To do so I want to use the boto library which is distributed via PyPi repository. The boto library is not installed by default in the IronWorker runtime environment.
How can I bundle the boto library dependency with my IronWorker code?
Ideally, I'm hoping I can use something like the gem dependency bundling available for Ruby IronWorkers, i.e. in myRuby.worker specify:
gemfile '../Gemfile', 'common', 'worker' # merges gems from common and worker groups
In the Python Loggly sample, I see that the hoover library is used:
#here we have to include hoover library with worker.
hoover_dir = os.path.dirname(hoover.__file__)
shutil.copytree(hoover_dir, worker_dir + '/loggly') #copy it to worker directory
However, I can't see where/how you specify which hoover library version you want, or where to download it from.
What is the official/correct way to use 3rd party libraries in Python IronWorkers?
Newer iron_worker versions have native support for the pip command.
So, you need:
runtime "python"
exec "something.py"
pip "boto"
pip "someotherpip"
full_remote_build true
[edit]We've worked on our toolset a bit since this answer was written and accepted. The answer from my colleague below is the recommended course moving forward.[/edit]
I wrote the Python client library for IronWorker. I'm also employed by Iron.io.
If you're using the Python client library, the easiest (and recommended) way to do this is to just copy over the library's installed folder, and include it when uploading the package. That's what the Python Loggly sample is doing above. As you said, that doesn't specify a version or where to download the library from, because it doesn't care. It just takes the one installed on your system and uses it. Whatever you get when you enter "import boto" on your local machine is what would be uploaded.
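In other words, the boto equivalent of the Loggly snippet above would look something like this (worker_dir as defined in that sample):

import os
import shutil
import boto

# copy the locally installed boto package into the worker upload directory
boto_dir = os.path.dirname(boto.__file__)
shutil.copytree(boto_dir, os.path.join(worker_dir, 'boto'))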
The other option is using our CLI to upload your worker, with a .worker file.
To do this, here's what you'd need to do:
Create a botoworker.worker file:
runtime "binary"
build 'pip install --install-option="--prefix=`pwd`/pips" boto'
file 'botoworker.py'
exec "botoworker.sh"
That second line is the pip command that will be run to install the dependency. You can modify it like you would any pip command run from the command line. It's going to execute that command on the worker during the "build" phase, so it's only executed once instead of every time you run a task.
The third line should be changed to the Python file you want to run--it's your Python worker file. Here's the one we used to test this:
import boto
If you save that as botoworker.py, the above should work without any modification. :)
The fourth line is a shell script that's going to actually run your worker. I've included the one we used below. Just save it as botoworker.sh, and you won't have to worry about modifying the .worker file above.
PYTHONPATH="$HOME/pips/lib/python2.7/site-packages:$PYTHONPATH" python botoworker.py "$@"
You'll notice it refers to your Python file--if you don't name your Python file botoworker.py, remember to change it here, too. All this does is set your PYTHONPATH to include the installed library, and then runs your Python file.
To upload this, just make sure you have the CLI installed (gem install iron_worker_ng, making sure your Ruby version is 1.9.3 or higher) and then run "iron_worker upload botoworker" in your shell, from the same directory your botoworker.worker file is in.
Hope this helps!