I have a python script which basically runs the following three commands:
kubectl apply -f class.yaml
kubectl apply -f rbac.yaml
kubectl apply -f deployment-arm.yaml
I want to use the kubernetes-client written in Python to replace it. My current code loads the three yaml files (using pyyaml), edits them a bit, dumps them into new files, and uses the command-line kubectl to execute those three commands. Some of the code:
# load files, edit them and dump into new files, part ...
result = run(['kubectl', 'apply', '-f', class_file_path])
# status check part ...
result = run(['kubectl', 'apply', '-f', rbac_file_path])
# status check part ...
result = run(['kubectl', 'apply', '-f', deployment_file_path])
# status check part ...
What I want to do: replace those three commands with the Python kubernetes-client. Reading the docs and searching the topic, I came across the create_namespaced_deployment method, which I think I need to use for the deployment_file_path file. But I can't seem to figure out what I need to do with the two other files.
Assuming that I have already loaded the three yaml files (using pyyaml) and edited them (without dumping them into new files), so that I now have three yaml dicts deployment_dict, class_dict, and rbac_dict: how can I use the client to apply those three objects?
EDIT: BTW, if it's not possible to pass the three dicts, I could just dump them into files again, but I want to use the Python client instead of kubectl. How do I do it?
There is a separate function for every object and action:
from kubernetes import client, config
import yaml

# safe_load expects file contents, not a file name
with open("my_deployment.yml") as f:
    body = yaml.safe_load(f)

config.load_kube_config()
apps_api = client.AppsV1Api()
apps_api.create_namespaced_deployment(body=body, namespace="default")
apps_api.replace_namespaced_deployment(name=body["metadata"]["name"], body=body, namespace="default")
apps_api.patch_namespaced_deployment(name=body["metadata"]["name"], body=body, namespace="default")
apps_api.delete_namespaced_deployment(name=body["metadata"]["name"], namespace="default")
with open("my_cluster_role.yml") as f:
    body = yaml.safe_load(f)

rbac_api = client.RbacAuthorizationV1Api()
rbac_api.create_cluster_role(body=body)
rbac_api.patch_cluster_role(name=body["metadata"]["name"], body=body)
rbac_api.replace_cluster_role(name=body["metadata"]["name"], body=body)
rbac_api.delete_cluster_role(name=body["metadata"]["name"])
# And so on
When you use kubectl apply, you don't care whether the object already exists, which API to use, which method to call, and so on. With the client library, as you can see from the example above, you need to:
Load kube-config.
Select the right API to use.
Select the method you want to use. Note that create_something will not work if that something already exists.
I recommend going through the examples that the library provides; they really are great for learning.
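If you would rather hand the client your already-edited dicts instead of picking one method per object, recent versions of the library also ship a helper in kubernetes.utils that dispatches on each dict's apiVersion/kind. A minimal sketch, assuming the three dicts from your question and a reasonably recent client version (note that, like the create_* methods, it creates rather than applies, so it fails if the object already exists):
from kubernetes import client, config, utils

config.load_kube_config()
k8s_client = client.ApiClient()

# class_dict, rbac_dict and deployment_dict are the dicts already
# loaded and edited with pyyaml, as described in the question
for manifest in (class_dict, rbac_dict, deployment_dict):
    utils.create_from_dict(k8s_client, manifest)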
I have a list of Kubernetes pods whose logs I need to save to a text file (or multiple text files).
I need a command which:
stores multiple pod logs in text files (or one single text file). So far I have this command, which stores one pod's logs in one text file, but that is not enough, since I would have to spell out each pod name individually:
$ kubectl logs ipt-prodcat-db-kp-kkng2 -n ho-it-sst4-i-ie-enf > latest.txt
I then need the command to feed these files into a Python script that checks for various strings. So far this works, but it would be extremely useful if it could be combined with the command above:
python CheckLogs.py latest.txt latest2.txt
Is it possible to do either (1) or both (1) and (2) in a single command?
The simplest solution is to create a shell script that does exactly what you are looking for:
#!/bin/sh
FILE="text1.txt"
for p in $(kubectl get pods -o jsonpath="{.items[*].metadata.name}"); do
    kubectl logs "$p" >> "$FILE"
done
With this script you will get the logs of all the pods in your namespace in $FILE. You can even add a line like python CheckLogs.py text1.txt at the end; a Python-only variant is sketched below.
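If you would rather drive both steps from Python in one go, here is a minimal sketch, assuming kubectl is on your PATH and that CheckLogs.py accepts log file paths as arguments (the namespace is the one from your example):
import subprocess

ns = "ho-it-sst4-i-ie-enf"

# ask kubectl for all pod names in the namespace
pods = subprocess.check_output(
    ["kubectl", "get", "pods", "-n", ns,
     "-o", "jsonpath={.items[*].metadata.name}"],
    text=True,
).split()

# dump each pod's logs into its own text file
files = []
for pod in pods:
    fname = f"{pod}.txt"
    with open(fname, "w") as out:
        out.write(subprocess.check_output(["kubectl", "logs", pod, "-n", ns], text=True))
    files.append(fname)

# hand all the files to the checking script in one call
subprocess.run(["python", "CheckLogs.py", *files], check=True)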
There are various tools that could help here. Some of them are commonly available, and some are shortcuts for which I have written my own scripts.
xargs: used to run multiple command lines in various combinations, based on the input. For instance, if you piped in text output containing three lines, you could execute three commands using the content of those three lines. There are many possible variations.
arg1: a shortcut I wrote that simply takes stdin and produces the first argument. The simplest form of this would just be awk '{print $1}', but I designed mine to take optional parameters, for instance to override the argument number or separator, or to take a filename instead. I often use -i{} to specify a substitution marker for the value.
skipfirstline: another shortcut I wrote, which takes some multiline text input and omits the first line. It is just sed -n '1!p'.
head/tail: these print some of the first or last lines of stdin. Interesting forms of these take negative numbers. Read the man page and experiment.
sed: often a part of my pipelines, for making inline replacements of text.
I am trying to create a simple automation that replaces the content of a config map in OpenShift with an edited yaml file. I have tried many oc commands and failed, and I was wondering if anyone had an idea of how to do this.
Just to clarify:
I'm using oc get configmap <configmap name> to get the current configmap from my project.
Then I am using Python to make my changes to the configmap data.
Then I want to replace the current config map with the newly edited one.
I tried edit, apply, and change, but they all failed.
Would appreciate the help :)
There is an"oc"command for such use case:oc patch
With"oc patch"you can edit, replace, add, remove parts of any OCP object
If you google "oc patch" you'll find many examples on the net, including the official OCP v4.7 oc patch documentation and the oc patch "man" page.
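For instance, a merge patch driven from a Python script could look like the sketch below; the configmap name app-config and the data key settings.conf are hypothetical placeholders for your own object:
import json
import subprocess

# hypothetical configmap name and data key; replace with your own
new_data = {"data": {"settings.conf": "edited contents"}}

# merge-patch only the keys we supply, leaving the rest of the object alone
subprocess.run(
    ["oc", "patch", "configmap", "app-config",
     "--type", "merge", "-p", json.dumps(new_data)],
    check=True,
)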
You just need to work with "inputs" and "outputs".
Imagine a lighttpd.conf:
server.modules = (
    "mod_scgi",
    "mod_compress",
    "mod_accesslog"
)
oc create cm lighttpd --from-file lighttpd.conf
So, as an example, let's change mod_scgi to mod_fastcgi. I wrote this script (modify.py):
import fileinput
for line in fileinput.input():
    if 'mod_scgi' in line:
        print(line.replace('scgi', 'fastcgi').rstrip())
    else:
        print(line.rstrip())
So, to change the configMap, update its value and apply it again:
oc get cm lighttpd -o yaml | python modify.py | oc apply -f -
oc get cm lighttpd -o yaml prints the configmap as yaml
the output goes to the script modify.py
modify.py changes and prints the lines as they are read from standard input
the output goes to oc
oc apply -f - reads from standard input and applies the result
I have a Python script that successfully sends SOAP to insert a record into a system. The values are static in the test. I need to make the value dynamic: an argument passed on the command line or some other stored value.
execute: python myscript.py
<d4p1:Address>MainStreet</d4p1:Address> ... this works to add the hard-coded "MainStreet"
execute: python myscript.py MainStreet
... this now tries to pass the argument MainStreet
<d4p1:Address>sys.argv[1]</d4p1:Address> ... this does not work
It saves the literal text address as "sys.argv[1]". I have imported sys, and I have tried %, {}, etc. from web searches. What syntax am I missing?
You need to read a little about how to create strings in Python; below is how it could look in your code. Sorry, it's hard to say more without seeing your actual code. Also, you shouldn't really build XML like that; you should use, for instance, the xml module from the standard library.
test = "<d4p1:Address>" + sys.argv[1] + "</d4p1:Address>"
I have two Python scripts, scriptA and scriptB, which run on Unix systems. scriptA takes 20s to run and generates a number X. scriptB needs X when it is run and takes around 500ms. I need to run scriptB every day, but scriptA only once a month, so I don't want to run scriptA from scriptB. I also don't want to manually edit scriptB each time I run scriptA. I thought of updating a file through scriptA, but I'm not sure where such a file could ideally be placed so that scriptB can read it later, independent of the location of the two scripts. What is the best way of storing this value X on a Unix system so that it can be used later by scriptB?
Many programs in Linux/Unix keep their config in /etc/ and use subfolders of /var/ for other files.
But you would probably need root privileges for that.
If you run the scripts from your home folder, then you could create a file ~/.scriptB.rc or a folder ~/.scriptB/ or ~/.config/scriptB/.
See also the Filesystem Hierarchy Standard on Wikipedia.
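A minimal sketch of that layout, using the per-user directory ~/.config/scriptB suggested above (the file name x_value is a hypothetical choice):
import os

# per-user data location, no root privileges needed
cfg_dir = os.path.expanduser("~/.config/scriptB")
os.makedirs(cfg_dir, exist_ok=True)
x_file = os.path.join(cfg_dir, "x_value")

# scriptA writes X once a month
with open(x_file, "w") as f:
    f.write("42")  # hypothetical value of X

# scriptB reads it back every day
with open(x_file) as f:
    x = f.read()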
It sounds like you want to serialize ScriptA's results, save them in a file or database somewhere, and then have ScriptB read those results (possibly also modifying the file or updating the database entry to indicate that those results have now been processed).
To make that work, ScriptA and ScriptB need to agree on the location and format of the data ... and you might want to implement some sort of locking to ensure that ScriptB doesn't end up with corrupted inputs if it happens to be run at the same time that ScriptA is writing or updating the data (and, conversely, that ScriptA doesn't corrupt the data store by writing to it while ScriptB is accessing it).
Of course, ScriptA and ScriptB could each have a filename or other data location hard-coded into their sources. However, that would violate the DRY principle. So you might want them to share a configuration file. (Of course the configuration filename is also repeated in these sources, or at least the import of the common bit of configuration code, but the latter still ensures that an installation/configuration detail, namely the location and, possibly, format of the data store, is decoupled from the source code. Thus it can be changed, in the shared config, without affecting the rest of the code for either script.)
As for precisely which type of file and serialization to use ... that's a different question.
These days, as strange as it may sound, I'd suggest using SQLite3. It may seem like overkill to use an SQL "database" for simply storing a single value. However, SQLite3 is included in the Python standard library, and it only needs a filename for configuration.
You could also use a pickle or JSON or even YAML (which would require a third party module) ... or even just text or some binary representation using something like struct. However, any of those will require that you parse your results and deal with any parsing or formatting errors. JSON would be the simplest option among these alternatives. Additionally you'd have to do your own file locking and handling if you wanted ScriptA and ScriptB (and, potentially, any other scripts you ever write for manipulating this particular data) to be robust against any chance of concurrent operations.
The advantage of SQLite3 is that it handles the parsing and decoding and the locking and concurrency for you. You create the table once (perhaps embedded in ScriptA as a rarely used "--initdb" option for occasions when you need to recreate the data store). Your code to read it might look as simple as:
#!/usr/bin/python
import sqlite3
db = sqlite3.connect('./foo.db')
cur = db.cursor()
results = cur.execute('SELECT value, MAX(date) FROM results').fetchone()[0]
... and writing a new value would look a bit like:
#!/usr/bin/python
# (Same import, db= and cur= from above)
with db:
cur.execute('INSERT INTO results (value) VALUES (?)', (myvalue,))
All of this assuming you had, at some time, initialized the data store (foo.db in this example) with something like:
#!/usr/bin/python
# (Same import, db= and cur= from above)
with db:
cur.execute('CREATE TABLE IF NOT EXISTS results (value INTEGER NOT NULL, date TIMESTAMP DEFAULT current_timestamp)')
(Actually you could just execute that command every time, if you wanted your scripts to recover silently from someone cleaning out the old data store.)
This might seem like more code than a JSON file-based approach. However, SQLite3 provides ACID (transactional) semantics as well as abstracting away the serialization and deserialization.
Also note that I'm glossing over a few details. My examples above actually create a whole table of results, with timestamps for when they were written to your data store. These would accumulate over time and, if you were using this approach, you'd periodically want to clean up your "results" table with a command like:
#!/usr/bin/python
# (Same import, db= and cur= from above)
with db:
cur.execute('DELETE FROM results WHERE date < ?', cur.execute('SELECT MAX(date) FROM results').fetchone())
Alternatively, if you really never want access to your prior results, change the INSERT into an UPDATE, like so:
#!/usr/bin/python
# (Same import, db= and cur= from above)
with db:
cur.execute('UPDATE results SET value=(?)', (mynewvalue,))
(Also note that (mynewvalue,) is a single-element tuple. The DBAPI requires that our parameters be wrapped in tuples, which is easy to forget when you first start using it with single parameters such as this.)
Obviously, if you took this UPDATE-only approach, you could drop the date column from the results table, and all those references to MAX(date) from the queries.
I chose to use the slightly more complex schema in my earlier examples because it allows your scripts to be a bit more robust with very little additional complexity. You could then do other error checking (detecting missing values where ScriptB finds that ScriptA hasn't been run as intended, for example).
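Putting the pieces above together, here is a self-contained sketch using the same foo.db and results table as in the examples (42 stands in for ScriptA's real X):
#!/usr/bin/python
import sqlite3

db = sqlite3.connect('./foo.db')
cur = db.cursor()
with db:
    # idempotent initialization, safe to run every time
    cur.execute('CREATE TABLE IF NOT EXISTS results '
                '(value INTEGER NOT NULL, date TIMESTAMP DEFAULT current_timestamp)')
    # ScriptA would write its result here
    cur.execute('INSERT INTO results (value) VALUES (?)', (42,))
# ScriptB would read the most recent value back
print(cur.execute('SELECT value, MAX(date) FROM results').fetchone()[0])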
Edit/run crontab -e:
# this will run every month on the 25th at 2am
0 2 25 * * python /path/to/scriptA.py > /dev/null
# this will run every day at 2:10 am
10 2 * * * python /path/to/scriptB.py > /dev/null
Create an external file for both scripts:
In scriptA:
>>> with open('/path/to/test_doc','w+') as f:
...     f.write('1')
...
In scriptB:
>>> with open('/path/to/test_doc','r') as f:
...     v = f.read()
...
>>> v
'1'
You can take a look at PyPubSub.
It's a Python package which provides a publish–subscribe API that facilitates event-based programming.
It'll give you an OS-independent solution to your problem and only requires a few additional lines of code in both A and B.
Also, you don't need to handle messy files!
Assuming you are not running the two scripts at the same time, you can (pickle and) save the go-between object anywhere, as long as the load and the save point to the same system path. For example:
import pickle # or import cPickle as pickle
# Create a python object like a dictionary, list, etc.
favorite_color = { "lion": "yellow", "kitty": "red" }
# Write to file ScriptA
f_myfile = open('C:\\My Documents\\My Favorite Folder\\myfile.pickle', 'wb')
pickle.dump(favorite_color, f_myfile)
f_myfile.close()
# Read from file ScriptB
f_myfile = open('C:\\My Documents\\My Favorite Folder\\myfile.pickle', 'rb')
favorite_color = pickle.load(f_myfile) # variables come out in the order you put them in
f_myfile.close()
I have a server/client socket pair in Python. The server receives specific commands, then prepares the response and sends it to the client.
In this question, my concern is just about possible injections in the code: whether it would be possible to ask the server to do something weird with the second parameter, i.e. whether the control on the command contents is insufficient to avoid undesired behaviour.
EDIT: according to the advice received, I added the parameter shell=True when calling check_output on Windows. This should not be dangerous, since the command is a plain 'dir'.
self.client, address = self.sock.accept()
...
cmd = bytes.decode(self.client.recv(4096))
ls: executes a system command but only reads the content of a directory.
if cmd == 'ls':
    if self.linux:
        output = subprocess.check_output(['ls', '-l'])
    else:
        output = subprocess.check_output('dir', shell=True)
    self.client.send(output)
cd: just calls os.chdir.
elif cmd.startswith('cd '):
    path = cmd.split(' ')[1].strip()
    if not os.path.isdir(path):
        self.client.send(b'is not path')
    else:
        os.chdir(path)
        self.client.send(os.getcwd().encode())
get: sends the content of a file to the client.
elif cmd.startswith('get '):
    file = cmd.split(' ')[1].strip()
    if not os.path.isfile(file):
        self.client.send(b'ERR: is not a file')
    else:
        try:
            with open(file) as f:
                contents = f.read()
        except IOError as er:
            res = "ERR: " + er.strerror
            self.client.send(res.encode())
            continue
        ... (send the file contents)
Implementation details aside, I cannot see any possibility of direct injection of arbitrary code, because you do not use the received parameters in the only external commands you run (ls -l and dir).
But you may still have some security problems:
you locate commands through the PATH instead of using absolute locations. If somebody could change the PATH environment variable, who knows what could happen... I advise you to directly use os.listdir('.'), which is portable and carries less risk (see the sketch after this list).
you seem to have no control over allowed files. If I remember correctly, reading CON: or other special files on older Windows versions gave weird results. And you should never give any access to sensitive files, configuration, ...
you could limit the length of requested file names, to keep users from trying to break the server with abnormally long ones.
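For the first point, a minimal sketch of a subprocess-free listing (my suggestion, not your original code; it returns only the entry names, without sizes or permissions):
import os

def list_directory():
    # never consults PATH, and behaves the same on Linux and Windows
    return ("\n".join(sorted(os.listdir('.'))) + "\n").encode()
The server could then reply with self.client.send(list_directory()).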
Typical issues in a client-server scenario are:
Tricking the server into running a command that is determined by the client. In the most obvious form this happens if the server allows the client to run commands (yes, stupid). However, this can also happen if the client can supply only command parameters but shell=True is used. E.g. using subprocess.check_output('dir %s' % dir, shell=True) with a client-supplied dir variable would be a security issue; dir could have a value like c:\ && deltree c:\windows (a second command has been added thanks to the flexibility of the shell's command-line interpreter). A relatively rare variation of this attack is the client being able to influence environment variables like PATH to trick the server into running a different command than intended.
Using unexpected functionality of built-in programming language functions. For example, fopen() in PHP won't just open files but fetch URLs as well. This allows passing URLs to functionality expecting file names and playing all kinds of tricks with the server software. Fortunately, Python is a sane language - open() works on files and nothing else. Still, database commands for example can be problematic if the SQL query is generated dynamically using client-supplied information (SQL Injection).
Reading data outside the allowed area. Typical scenario is a server that is supposed to allow only reading files from a particular directory, yet by passing in ../../../etc/passwd as parameter you can read any file. Another typical scenario is a server that allows reading only files with a particular file extension (e.g. .png) but passing in something like passwords.txt\0harmless.png still allows reading files of other types.
Out of these issues, only the last one seems present in your code. In fact, your server doesn't check at all which directories and files the client should be allowed to read; this is a potential issue, as a client might be able to read confidential files.
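One common mitigation, sketched here under the assumption of a single allowed directory (/srv/files is a hypothetical choice), is to resolve the requested path and check that it stays inside that directory before opening anything:
import os

ALLOWED_ROOT = os.path.realpath("/srv/files")  # hypothetical allowed directory

def is_allowed(requested):
    # realpath resolves symlinks and '..' components, so tricks like
    # '../../../etc/passwd' are normalized before the comparison
    real = os.path.realpath(os.path.join(ALLOWED_ROOT, requested))
    return real == ALLOWED_ROOT or real.startswith(ALLOWED_ROOT + os.sep)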