I am looking for a method to check which Jenkins plugins are not used.
So far I have found that I can look for tags with a plugin attribute in the config.xml files and then compare them with the ones listed in the plugins directory.
But that does not give me a complete list; some plugins, like role-strategy, are still missing from it.
I use Python code like the one below:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import glob
from lxml import etree as ET
from collections import defaultdict

def find(name, path):
    return glob.glob(path + '/jobs/*/' + name)

def get_plugin_list(path):
    # plugin files are <name>.jpi; strip the directory and the extension
    return [x[:-4].split('/')[-1] for x in glob.glob(path + '/plugins/*.jpi')]

if __name__ == "__main__":
    jobs_dict = defaultdict(list)
    plugins_all = set(get_plugin_list('/home/user/.jenkins'))
    for config in find('config.xml', '/home/user/.jenkins'):
        with open(config) as f:
            tree = ET.XML(f.read())
        # elements contributed by a plugin carry a plugin="name@version" attribute
        plugins = tree.xpath("/project//@plugin")
        job = config.split('/')[-2]
        for p in plugins:
            jobs_dict[p].append(job)
    with open('/home/user/.jenkins/config.xml') as f:
        tree = ET.XML(f.read())
    plugins_config = tree.xpath("/hudson//@plugin")
    # strip the "@version" part so the names match the .jpi file names
    plugins_used = set([x.split('@')[0] for x in jobs_dict.keys() + plugins_config])
    print "######## All plugins\n", '\n'.join(plugins_all)
    print "######## Used plugins\n", '\n'.join(plugins_used)
    print "######## Unused plugins\n", '\n'.join(plugins_all - plugins_used)
There's a Jenkins plugin precisely for this matter:
Plugin Usage
Thanks to this wonderful plugin I found many redundant plugins to remove in the Plugin Manager (you will only be able to remove plugins that have no dependencies).
Here's how it looks: the plugin adds a link to the Jenkins sidebar, and its page lists all the plugins that any existing job uses (press the expand button to see the job names).
Some plugins only affect the Jenkins system configuration, rather than individual jobs; you should be able to find those by changing the find method in your code to include /home/user/.jenkins/config.xml.
Many plugins have their own configuration files in $JENKINS_HOME, e.g. $JENKINS_HOME/org.jenkinsci.plugins.p4.PerforceScm.xml. I haven't looked into this, but you might be able to find some extra plugin usage by searching the config.xml files for the plugin name (e.g. PerforceSCM) rather than the term "plugin".
Also, if you only want to search for jobs that are enabled, you can filter out jobs with "<disabled>true</disabled>" in their config.xml.
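For example, a small helper along these lines could do that filtering (a sketch building on the lxml tree your script already parses for each job; the function name is just illustrative):
def job_is_disabled(tree):
    # job config.xml files contain <disabled>true</disabled> for disabled jobs
    node = tree.find('disabled')
    return node is not None and (node.text or '').strip() == 'true'
Calling it right after ET.XML(f.read()) and skipping jobs for which it returns True restricts the "used plugins" set to enabled jobs.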
If you need to check the list of installed but disabled plugins, you can check the Installed tab in the Plugin Manager.
The unchecked ones are the disabled ones.
I realise this question has been asked before (What's the best practice using a settings file in Python?), but seeing as that was asked 7 years ago, I feel it is valid to discuss again given how technologies have evolved.
I have a python project that requires different configurations to be used based on the value of an environment variable. Since making use of the environment variable to choose a config file is simple enough, my question is as follows:
What format is seen as the best practice in the software industry for setting up a configuration file in python, when multiple configurations are needed based on the environment?
I realise that Python comes with a ConfigParser module, but I was wondering if it might be better to use a format such as YAML or JSON because of their rise in popularity due to their ease of use across languages. Which format is seen as easier to maintain when you have multiple configurations?
If you really want to use an environment-based YAML configuration, you could do so like this:
config.py
import os
import yaml

config = None
# the `env` environment variable selects which YAML file to load (default: "default")
filename = os.getenv('env', 'default').lower()
script_dir = os.path.dirname(__file__)
abs_file_path = os.path.join(script_dir, filename)
with open(abs_file_path, 'r') as stream:
    try:
        config = yaml.safe_load(stream)
    except yaml.YAMLError as exc:
        print(exc)
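A brief usage sketch, assuming a YAML file named after the environment variable (e.g. a file simply called staging, matching the lookup above) sits next to config.py:
# usage sketch: the environment variable decides which YAML file gets loaded
import os
os.environ['env'] = 'staging'   # normally set by the deployment environment, not in code

import config                   # importing config.py runs the loading code above
print(config.config)            # whatever mapping/list structure the YAML file defines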
I think looking at the standard configuration for a Python Django settings module is a good example of this, since the Python Django web framework is extremely popular for commercial projects and therefore is representative of the software industry.
It doesn't get too fancy with JSON or YAML config files - it simply uses a Python module called settings.py that can be imported into any other module that needs to access the settings. Environment-variable-based settings are also defined there. Here is a link to an example settings.py file for Django on GitHub:
https://github.com/deis/example-python-django/blob/master/helloworld/settings.py
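As a rough illustration of that approach (the setting names below are illustrative, not taken from the linked file), such a settings.py can mix constants with values read from environment variables:
# settings.py -- a minimal sketch of the Django-style settings-module approach
import os

DEBUG = os.environ.get('DEBUG', 'false').lower() == 'true'
DATABASE_URL = os.environ.get('DATABASE_URL', 'sqlite:///local.db')
ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS', 'localhost').split(',')
Any other module then simply imports the settings module and reads settings.DEBUG and friends.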
This is really late to the party, but this is what I use and I'm pleased with it (if you're open to a pure Python solution). I like it because my configurations can be set automatically based on where this is deployed using environment variables. I haven't been using this that long so if someone sees an issue, I'm all ears.
Structure:
|-- settings
    |-- __init__.py
    |-- config.py
config.py
class Common(object):
    XYZ_API_KEY = 'AJSKDF234328942FJKDJ32'
    XYZ_API_SECRET = 'KDJFKJ234df234fFW3424##ewrFEWF'

class Local(Common):
    DB_URI = 'local/db/uri'
    DEBUG = True

class Production(Common):
    DB_URI = 'remote/db/uri'
    DEBUG = False

class Staging(Production):
    DEBUG = True
__init__.py
from settings.config import Local, Production, Staging
import os

config_space = os.getenv('CONFIG_SPACE', None)
if config_space:
    if config_space == 'LOCAL':
        auto_config = Local
    elif config_space == 'STAGING':
        auto_config = Staging
    elif config_space == 'PRODUCTION':
        auto_config = Production
    else:
        auto_config = None
        raise EnvironmentError(f'CONFIG_SPACE is unexpected value: {config_space}')
else:
    raise EnvironmentError('CONFIG_SPACE environment variable is not set!')
If my environment variable is set in each place where my app exists, I can bring this into my modules as needed:
from settings import auto_config as cfg
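For example, with CONFIG_SPACE=LOCAL exported, the rest of the code sees the Local class (values as defined in config.py above):
from settings import auto_config as cfg

print(cfg.DB_URI)   # 'local/db/uri' when CONFIG_SPACE=LOCAL
print(cfg.DEBUG)    # True for Local and Staging, False for Production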
That really depends on your requirements, rather than the format's popularity. For instance, if you just need simple key-value pairs, an INI file would be more than enough. As soon as you need complex structures (e.g., arrays or dictionaries), I'd go for JSON or YAML. JSON simply stores data (it's more intended for automated data flow between systems), while YAML is better for human-generated (or maintained, or read) files, as it has comments and lets you reference values elsewhere in the file. And on top of that, if you want robustness, flexibility, and a means to check the correct structure of the file (but don't care much about editing the data by hand), I'd go for XML.
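For the simple key-value case, for example, the standard library is already enough (the file name and section names here are purely illustrative):
# reading a simple INI file with the stdlib configparser
import configparser

cfg = configparser.ConfigParser()
cfg.read('app.ini')
# app.ini might contain:
#   [database]
#   host = localhost
#   port = 5432
db_host = cfg['database']['host']
db_port = cfg.getint('database', 'port')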
I recommend giving trapdoor a try for turn-key configuration (disclaimer: I'm the author of trapdoor).
I also like to take advantage of the fact that Python source does not have to be compiled, so you can use plain Python files for configuration. But in the real world you may have multiple environments, each of which requires a different configuration, and you may also want to read some (mostly sensitive) information from env vars or from files that are not in source control (to prevent committing those by mistake).
That's why I wrote this library: https://github.com/davidohana/kofiko,
which lets you use plain Python files for configuration, but is also able to override those config settings from .ini files or env vars, and also supports customization for different environments.
Blog post about it: https://medium.com/swlh/code-first-configuration-approach-for-python-f975469433b9
I have a scrapy project that writes the data it scrapes to a database. It was based on this great tutorial: http://newcoder.io/scrape/part-3/
I have hit an issue now that I am trying to write some integration tests for the project. I am following the guidelines here: Scrapy Unit Testing
It's not clear to me how best to pass in the appropriate database settings. I'd like the tests to use their own database that I can ensure is in a known state before the tests start running.
So just doing import settings won't do the trick, because when the project is run in test mode it needs to use a different settings file.
I am familiar with Ruby on Rails projects where you specify a RAILS_ENV environment variable, and based on this environment variable, the framework will use settings from different files. Is there a similar concept that can apply when testing scrapy projects? Or is there a more pythonic alternative approach?
In the end I edited the settings.py file to support using an environment variable to determine which additional files to get the settings from, like this:
from importlib import import_module
import logging
import os

SCRAPY_ENV = os.environ.get('SCRAPY_ENV', None)
if SCRAPY_ENV is None:
    raise ValueError("Must set SCRAPY_ENV environment var")

# Load if file exists; incorporate any names started with an
# uppercase letter into globals()
def load_extra_settings(fname):
    if not os.path.isfile("config/%s.py" % fname):
        logger = logging.getLogger(__name__)
        logger.warning("Couldn't find %s, skipping" % fname)
        return
    mdl = import_module("config.%s" % fname)
    names = [x for x in mdl.__dict__ if x[0].isupper()]
    globals().update({k: getattr(mdl, k) for k in names})

load_extra_settings("secrets")
load_extra_settings("secrets_%s" % SCRAPY_ENV)
load_extra_settings("settings_%s" % SCRAPY_ENV)
I made an example github repo showing how this worked: https://github.com/alanbuxton/scrapy_local_settings
Keen to find out if there is a neater way of doing it.
I'm using reStructuredText for my blog/website and I want to add a global include file. I have access to, and am happy to change, the settings file I'm using to generate the HTML output; I just can't figure out the syntax for either:
adding a default include file to the parser
defining directives/inline roles, etc. in Python with docutils
I tried reading the source code and the documentation and just find it a bit hard to follow. I'm hoping that I just missed something super-obvious, but I'd like to do something like the following (the first part is just what is already there -- you can see the rest of the file in the jekyll-rst plugin source, which links right to it):
import sys
from docutils.core import publish_parts
from optparse import OptionParser
from docutils.frontend import OptionParser as DocutilsOptionParser
from docutils.parsers.rst import Parser

# sets up a writer that is then called to parse rst pages repeatedly
def transform(writer=None, part=None):
    p = OptionParser(add_help_option=False)

    # Collect all the command line options
    docutils_parser = DocutilsOptionParser(components=(writer, Parser()))
    for group in docutils_parser.option_groups:
        p.add_option_group(group.title, None).add_options(group.option_list)

    p.add_option('--part', default=part)

    opts, args = p.parse_args()
    # ... more settings, etc
    # then I just tell the parser/writer to process specified file X.rst every time
    # (or alternately a python file defining more roles...but nicer if in rst)
Is there a simple way to do this? It'd be great to define a file defaults.rst and have that load each time.
EDIT: Here are some examples of what I'd like to be able to globally include (custom directives would be nice too, but I'd probably write those in code)
.. role:: raw-html(raw)
   :format: html
.. |common-substitution| replace:: apples and orange
.. |another common substitution| replace:: etc
I'm not quite sure if I understand the question. Do you want to define a number of, for example, substitutions in some file and have these available in all your other reStructuredText files, or do you want to include some common HTML in your output files? Can you clarify your question?
If it is the former that you want to do you can use the include directive, as I outline in this answer.
Alternatively, if you want some common HTML included in the generated output, try copying and editing the template.txt file which is included in the module directory path/to/docutils/writers/html4css1/. You can include arbitrary HTML elements in this file and modify the layout of the HTML generated by Docutils. Neither of these methods requires you to modify the Docutils source code, which is always an advantage.
Edit: I don't think it is possible to set a flag to make Docutils use an include file. However, if you can use Sphinx, which is based on Docutils but has a load of extensions, then this package has a setting rst_prolog which does exactly what you need (see this answer). rst_prolog is:
A string of reStructuredText that will be included at the beginning of every source file that is read.
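For example, putting something like this in Sphinx's conf.py would prepend the role and substitutions from the question to every source file (the contents simply mirror the question's examples):
# conf.py -- a minimal rst_prolog sketch
rst_prolog = """
.. role:: raw-html(raw)
   :format: html

.. |common-substitution| replace:: apples and orange
.. |another common substitution| replace:: etc
"""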
I needed the exact same thing: A way to have some global reStructuredText files being automatically imported into every reStructuredText article without having to specify them each time by hand.
One solution to this problem is the following plugin:
import os

from pelican import signals
from pelican.readers import RstReader


class RstReaderWrapper(RstReader):
    enabled = RstReader.enabled
    file_extensions = ['rst']

    class FileInput(RstReader.FileInput):
        def __init__(self, *args, **kwargs):
            RstReader.FileInput_.__init__(self, *args, **kwargs)
            self.source = RstReaderWrapper.SourceWrapper(self.source)

    # Hook into RstReader
    RstReader.FileInput_ = RstReader.FileInput
    RstReader.FileInput = FileInput

    class SourceWrapper():
        """
        Mimics and wraps the result of a call to `open`
        """
        content_to_prepend = None

        def __init__(self, source):
            self.source = source

        def read(self):
            content = self.source.read()
            if self.content_to_prepend is not None:
                content = "{}\n{}".format(self.content_to_prepend, content)
            return content

        def close(self):
            self.source.close()


def process_settings(pelicanobj):
    include_files = pelicanobj.settings.get('RST_GLOBAL_INCLUDES', []) or []
    base_path = pelicanobj.settings.get('PATH', ".")

    def read(fn):
        with open(os.path.join(base_path, fn), 'r') as res:
            content = res.read()
        return ".. INCLUSION FROM {}\n{}\n".format(fn, content)

    inclusion = "".join(map(read, include_files)) if include_files else None
    RstReaderWrapper.SourceWrapper.content_to_prepend = inclusion


def register():
    signals.initialized.connect(process_settings)
Usage in short:
Create a plugin from the above code (best clone the repository from GitHub)
Import the plugin (adapt PLUGINS in pelicanconf.py)
Define the list of RST files (relative paths to the project root) to include by setting the variable RST_GLOBAL_INCLUDES in pelicanconf.py (see the sketch after this list)
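A minimal pelicanconf.py sketch for those steps (the plugin module name rst_global_includes and the file _defs.rst are just placeholders):
# pelicanconf.py -- illustrative plugin configuration
PLUGIN_PATHS = ['plugins']              # directory containing the plugin module
PLUGINS = ['rst_global_includes']       # whatever name you give the plugin file above
RST_GLOBAL_INCLUDES = ['_defs.rst']     # paths relative to PATH (the content root)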
Please note that neither pelican nor docutils is designed to allow this. No signal is provided that gives clean access to the raw contents of a source file before processing begins, nor is there a possibility to intercept the framework reading the file in "a normal way" (like subclassing, changing hardcoded configuration, etc.).
This plugin subclasses the internal class FileInput of RstReader and sets the class reference RstReader.FileInput to the subclass. Also, Python file objects are emulated through SourceWrapper.
Nevertheless, this approach works for me and is not cumbersome in the daily workflow.
I know this question is from 2012 but I think this answer can still be helpful to others.
I am relatively new to Python (I have already written some one-hour scripts like a little webserver or a local network chat) and want to program a plugin manager in it.
My idea is, that there is an interface for plugins, that has the following features:
getDependencies -> all dependencies of the plugin on other plugins
getFunctions -> all functions that this plugin introduces
initialize -> a function that is called when loading the plugin
(I could imagine to have a topological sorting algorithm on the dependencies to decide the order in which the plugins are initialized.)
I would like to implement multithreading, meaning that each plugin runs in its own thread, which has a work queue of function calls that will be executed serially. When a plugin calls a function of another plugin, it calls the manager, which in turn inserts the function call into the queue of the other plugin.
Further the manager should provide some kind of event system in which the plugins can register their own events and become listeners to the events of others.
Also I want to be able to reload a plugin if the code has changed or its thread crashed, without shutting down the manager/application. I already read How do I unload (reload) a Python module? in conjunction with this.
To make it clear once more: The manager should not provide any other functionality than supporting its plugins with a common communication interface to each other, the ability to run side by side (in a multithreaded manner without requiring the plugins to be aware of this) and restoring updated/crashed plugins.
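To make the per-plugin worker thread with a call queue concrete, here is a minimal sketch of what I have in mind (all names are illustrative, not an existing API):
# a per-plugin worker: the manager puts callables into the queue,
# and the plugin's own thread executes them one after another
import queue
import threading

class PluginWorker:
    def __init__(self, name):
        self.name = name
        self.calls = queue.Queue()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def submit(self, func, *args, **kwargs):
        # called by the manager when another plugin invokes one of this plugin's functions
        self.calls.put((func, args, kwargs))

    def _run(self):
        while True:
            func, args, kwargs = self.calls.get()
            func(*args, **kwargs)   # executed serially inside this plugin's thread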
So my questions are: Is it possible to do this in Python? And if so, are there design mistakes in this rough sketch? I would appreciate any good advice on this.
Other "literature":
Implementing a Plugin System in Python
At the most basic level, first of all, you want to provide a basic Plugin class which is a base for all plugins written for your application.
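For example, the base class can be as small as this (a sketch; the attribute and method names just echo the interface described in the question):
# plugin.py -- minimal base class that all plugins subclass
class Plugin(object):
    # other plugins this one depends on, e.g. ['logging', 'storage']
    dependencies = []

    def get_functions(self):
        # names of the functions this plugin exposes to others
        return []

    def initialize(self):
        # called by the manager once the plugin (and its dependencies) are loaded
        pass
The loader below then discovers every subclass of this Plugin class.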
Next we need to import them all.
import os
import sys

class PluginLoader():
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        for (dirpath, dirs, files) in os.walk(self.path):
            if dirpath not in sys.path:
                sys.path.insert(0, dirpath)
            for file in files:
                (name, ext) = os.path.splitext(file)
                if ext == os.extsep + "py":
                    __import__(name, None, None, [''])
        # every plugin module has now been imported, so its classes are registered
        for plugin in Plugin.__subclasses__():
            yield plugin
In Python 2.7 or 3.1+, instead of __import__(name, None, None, ['']), consider:
import importlib # just once
importlib.import_module(name)
This loads every plugin file and gives us all plugins. You would then select your plugins as you saw fit, and then use them:
from multiprocessing import Process, Pipe

plugins = {}

for plugin in PluginLoader("plugins"):
    ...  # select plugin(s)
    if selected:
        # keep the parent end of the pipe, hand the child end to the plugin process
        plugins[plugin.__name__], child = Pipe()
        p = Process(target=plugin, args=(child,))
        p.start()
...
for plugin in plugins.values():
    plugin.send("EventHappened")    # Pipe connections use send()/recv(), not put()/get()
...
for plugin in plugins.values():
    if plugin.poll():
        event = plugin.recv()
        ...  # handle event
This is just what comes to mind at first. Obviously much more would be needed to flesh this out, but it should be a good basis to work from.
Check the yapsy plugin: https://github.com/tibonihoo/yapsy. This should work for you.
We maintain a fairly large documentation using Sphinx in SVN.
As part of the generated output we would like to include the release notes of related Python modules as primary content (not as hyperlink!). The release notes of the external modules are also maintained in SVN. Is there some Sphinx-ish way to pull in the parts of the documentation from other (SVN) sources? Ok, using SVN externals is a way to solve the problem but perhaps not the smartest way...any better options?
The two options I can think of are:
Add an svn:externals link to the remote project (which you already know about).
Extend Sphinx with a custom directive to include files from remote subversion repositories.
I'm no expert on Sphinx internals but was able to cobble together a quick extension which embeds files from a remote subversion repository.
The extension adds an svninclude directive which takes 1 argument, the url of the repository where your docs are located. It checks this repository out into a temp directory _svncache located in the project root, and then proceeds to read the contents of each file and insert them into the parser's state machine.
Here is the code for the svninclude.py extension. It is oversimplified and has no error checking at the moment. If you plan to implement this let me know and I can provide some additional tips if you get stuck:
import os, re, subprocess, sys

from docutils import nodes, statemachine
from docutils.parsers.rst import directives
from sphinx.util.compat import Directive, directive_dwim


class SvnInclude(Directive):
    has_content = True
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = False

    def _setup_repo(self, repo):
        env = self.state.document.settings.env
        path = os.path.normpath(env.doc2path(env.docname, base=None))
        cache = os.path.join(os.path.dirname(path), '_svncache')
        root = os.path.join(cache, re.sub(r'[\W\-]+', '_', repo))

        if not os.path.exists(root):
            os.makedirs(root)
            subprocess.call(['svn', 'co', repo, root])

        return root

    def run(self):
        root = self._setup_repo(self.arguments[0])
        for path in self.content:
            data = open(os.path.join(root, path), 'rb').read()
            lines = statemachine.string2lines(data)
            self.state_machine.insert_input(lines, path)
        return []


def setup(app):
    app.add_directive('svninclude', directive_dwim(SvnInclude))
Here is an example of the markup you'd include in your index.rst (or other file):
.. svninclude:: http://svn.domain.com/svn/project

    one.rst
    doc/two.rst
Where the paths one.rst and doc/two.rst are relative to the subversion url, for example http://svn.domain.com/svn/project/one.rst.
You'd of course want to package up the svninclude.py and get it installed in your Python path. Here's what I did to test it:
Added 'svninclude' to the extensions list in source/conf.py.
Placed svninclude.py in the project root.
Then ran:
% PYTHONPATH=. sphinx-build -b html ./source ./build