How to run the code in an __init__.py - python

I have found some code that I think will allow me to communicate with my Helios Heat recovery unit. I am relatively new to Python (but not coding in general) and I really cannot work out how to use this code. It is obviously written for smarthome.py but I'd like to use it from the command line.
I can also see that the way this file is constructed is probably not the best way to construct an __init__.py but I'd like to try and use it first.
So, how do I run this code? https://github.com/mtiews/smarthomepy-helios
Cheers

After git clone https://github.com/mtiews/smarthomepy-helios.git: either
invoke python with the __init__.py script as an argument:
python smarthomepy-helios/__init__.py
or
make the __init__.py executable and run it:
chmod u+x smarthomepy-helios/__init__.py
smarthomepy-helios/__init__.py
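(The second variant assumes the script's first line is a shebang such as #!/usr/bin/env python; without one, the shell won't know to hand the file to the Python interpreter.)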
Running it either way gives me
2016-02-20 18:07:51,791 - root - ERROR - Helios: Could not open /dev/ttyUSB0.
Exception: Not connected
But passing --help I get a nice synopsis:
$> python smarthomepy-helios/__init__.py --help
usage: __init__.py [-h] [-t PORT] [-r READ_VAR] [-w WRITE_VAR] [-v VALUE] [-d]

Helios ventilation system commandline interface.

optional arguments:
  -h, --help            show this help message and exit
  -t PORT, --tty PORT   Serial device to use
  -r READ_VAR, --read READ_VAR
                        Read variables from ventilation system
  -w WRITE_VAR, --write WRITE_VAR
                        Write variable to ventilation system
  -v VALUE, --value VALUE
                        Value to write (required with option -v)
  -d, --debug           Prints debug statements.

Without arguments all readable values using default tty will be retrieved.
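So once a serial adapter is actually attached, you can point the script at the right device and read values; for example (the device path and variable name below are placeholders, not values verified against the Helios protocol):
python smarthomepy-helios/__init__.py --tty /dev/ttyUSB1
python smarthomepy-helios/__init__.py --tty /dev/ttyUSB1 --read <some_variable>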


Advanced Scripting inside a Dockerfile

I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The function pytest calls pylint before running the .py file. If the rating is not 10/10, it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code inside the Dockerfile from which I am attempting to build an image:
------------------------------------------
FROM ubuntu:x.xx
install python
install pytest
install pylint
copy test_file to the respective folder
execute pytest test_file_name.py
if the rating is not 10/10:
    prompt the user to resolve the rating issue and exit
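For concreteness, that pseudo-code could be spelled out as something like the following Dockerfile (a sketch only: the base image tag, package names and file paths are my assumptions, not taken from the question):
# Sketch of the pseudo-code above; versions and paths are illustrative
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python3 python3-pip \
 && pip3 install pytest pylint
WORKDIR /app
COPY test_file_name.py .
# pylint exits non-zero if it emits any messages, which fails the build;
# pytest therefore only runs once the module lints clean (10/10)
RUN pylint -r n test_file_name.py && pytest test_file_name.py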
------------ Here is the partial code snippet from the function ------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when a method name is passed
    pathfilename=$(echo "${argument1}" | sed 's/::.*//')
    clear && printf '\e[3J'
    output=$(docker exec -t "orch-$USER" pylint -r n "${pathfilename}")
    if echo "${output}" | grep 'warning.*error' &>/dev/null ||
       echo "${output}" | egrep 'warning|convention' &>/dev/null
    then
        # Highlight warnings in yellow and errors in red
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve 10/10 rating before pushing to github\n${NC}"
    fi
}
Another option I can think of:
Step 1: Build the image (using the Dockerfile) with all the required software.
Step 2: In a .py file, add the call to execute pytest along with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script (pretty much by just removing the function wrapper and taking out the docker exec part of the tool invocation). Once you've done that, you can COPY the shell script into your image and RUN it:
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
&& ./pylint-enforcer.sh
...
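A sketch of what that standalone pylint-enforcer.sh might look like (assuming the .py files to check sit in the image's working directory; the colorizing from the original function is dropped, and the 10/10 check simply relies on pylint's exit status):
#!/bin/bash
# pylint-enforcer.sh -- fail the image build unless pylint is clean
output=$(pylint -r n *.py) || {
    echo "${output}"
    echo "Fix module as per pylint/PEP8 messages to achieve 10/10 rating"
    exit 1
}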
It looks like pylint produces a non-zero exit code if it emits any messages. For the purposes of a Dockerfile it may be enough to just RUN pylint -r n .: if it prints anything, it returns a non-zero exit code, which docker build will interpret as "failure" and not proceed.
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (if you can run docker at all, you can edit arbitrary files on the host as root). I'd suggest running these tools in a non-Docker virtual environment during your CI process, and neither placing them in your Dockerfile nor depending on docker exec to run them.

ERROR: cluster.name is not a recognized option in elasticsearch

I am trying to set up Elasticsearch with Python and, as part of that, I am trying to run a search over YouTube data (as a sample).
I am using a Windows 10 x64 machine with Elasticsearch 6.5.4.
When I run the following command, I get an error:
PS C:\Users\XXXXX\elasticsearch-6.5.4\bin> .\elasticsearch cluster.name=youtube node.name=video
starts elasticsearch

Option                Description
------                -----------
-E <KeyValuePair>     Configure a setting
-V, --version         Prints elasticsearch version information and exits
-d, --daemonize       Starts Elasticsearch in the background
-h, --help            show help
-p, --pidfile <Path>  Creates a pid file in the specified path on start
-q, --quiet           Turns off standard output/error streams logging in console
-s, --silent          show minimal output
-v, --verbose         show verbose output

ERROR: Positional arguments not allowed, found [cluster.name=youtube, node.name=video]
It is mentioned in the usage/help info that you need to pass -E to set configurations:
-E Configure a setting
It is also mentioned in their Getting Started / Installation steps:
As mentioned previously, we can override either the cluster or node
name. This can be done from the command line when starting
Elasticsearch as follows:
./elasticsearch -Ecluster.name=my_cluster_name -Enode.name=my_node_name
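Applied to the command from the question, the PowerShell invocation becomes:
PS C:\Users\XXXXX\elasticsearch-6.5.4\bin> .\elasticsearch -Ecluster.name=youtube -Enode.name=video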

passing command line options having multiple suboptions to python script -- shell script

I have the following options:
python runscript.py -O start -a "-a "\"-o \\\"-f/dev/sda1 -b256k -Q8\\\" -l test -p maim\""
runscript.py takes -O and -a, then passes the rest on to shell script 1.
Shell script 1 takes option -a and should treat the remaining \"-o \\\"-f/dev/sda1 -b256k -Q8\\\" -l test -p maim\" as the argument for shell script 2.
Shell script 2 takes arguments -o, -l and -p.
Can anyone please help me with this kind of scenario? I am stuck where shell script 1 starts parsing the -o argument too.
Is there a simple way to do this? The hierarchy of shell script 1 calling shell script 2 should be maintained.
Regards
Sai
The command you gave is a bit confusing, so I am generalizing the scenario. Is this what you meant?
python runscript.py -p1 v1 -p2 v2 -p3 v3
runscript.py takes all the given parameters,
calls shellscript_1.sh with selected params, say -p2 v2,
and then calls shellscript_2.sh with the remaining params, say -p3 v3.
We may need a more accurate explanation of the problem.
The conventional way to do this in UNIX is to split the argument list about a pivot (usually --) such that the main script consumes all the arguments to the left of the pivot and leaves the remaining arguments for the other script(s). If you have flexibility in your calling function, I'd recommend doing it this way.
So, if runscript.py and both shell scripts all need to consume a separate argument list, your command line would look something like this:
python runscript.py <args for runscript> -- <args for 1st script> -- <args for 2nd script>
For example (I'm just guessing at your hierarchy):
python runscript.py -O start -- -l test -p maim -- -f/dev/sda1 -b256k -Q8
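A minimal sketch of how runscript.py could implement that split (the helper and script names are illustrative; it assumes exactly two -- pivots, as in the command line above):
import subprocess
import sys

def split_on_pivot(argv, pivot="--"):
    """Split an argument list into groups at each pivot marker."""
    groups, current = [], []
    for arg in argv:
        if arg == pivot:
            groups.append(current)
            current = []
        else:
            current.append(arg)
    groups.append(current)
    return groups

if __name__ == "__main__":
    # e.g. ['-O', 'start'], ['-l', 'test', '-p', 'maim'], ['-f/dev/sda1', '-b256k', '-Q8']
    own_args, script1_args, script2_args = split_on_pivot(sys.argv[1:])
    # runscript.py consumes own_args itself, then hands shell script 1 its
    # arguments plus the still-grouped remainder destined for shell script 2.
    subprocess.call(["./shellscript_1.sh", *script1_args, "--", *script2_args])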

Paver Unknown task error

I am getting started with Paver and cannot get a basic task to run; I am not sure what I am missing.
Docs
Link to the documentation
Installation
pip install paver
After installation I am able to execute paver on the command line:
---> paver.tasks.help
Usage: paver [global options] taskname [task options] [taskname [taskoptions]]

Options:
  --version             show program's version number and exit
  -n, --dry-run         don't actually do anything
  -v, --verbose         display all logging output
  -q, --quiet           display only errors
  -i, --interactive     enable prompting
  -f FILE, --file=FILE  read tasks from FILE [pavement.py]
  -h, --help            display this help information
  --propagate-traceback
                        propagate traceback, do not hide it under
                        BuildFailure (for debugging)
  -x COMMAND_PACKAGES, --command-packages=COMMAND_PACKAGES
                        list of packages that provide distutils commands

Tasks from paver.misctasks:
  generate_setup - Generates a setup.py file that uses paver behind the scenes
  minilib        - Create a Paver mini library that contains enough for a
                   simple pavement.py to be installed using a generated setup.py
  paverdocs      - Open your web browser and display Paver's documentation.

Tasks from paver.tasks:
  help           - This help display.
paverlib/tasks.py:
from paver.easy import task

@task
def testpaver():
    from nose.tools import set_trace; set_trace()
paverlib/__init__.py:
import tasks
Run
paver testpaver
Build failed: Unknown task: testpaver
What am I missing?
By default, Paver looks for pavement.py as a source for task definitions.
Have you tried putting testpaver in there?
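For instance, a minimal pavement.py in the directory you run paver from might look like this (a sketch; it simply moves the task from the question into the file Paver scans by default):
# pavement.py -- Paver picks up task definitions from this file by default
from paver.easy import task

@task
def testpaver():
    # carried over from the question: drop into a debugger
    from nose.tools import set_trace; set_trace()
With that in place, paver testpaver should be able to find the task.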

how to load python script in interactive shell

I am running sudo python get_gps.py -c, expecting it to load the script and then present the interactive shell so I can debug the script live, as opposed to typing it in manually.
From the docs:
$ python --help
usage: /usr/bin/python2.7 [option] ... [-c cmd | -m mod | file | -] [arg] ...
Options and arguments (and corresponding environment variables):
-B     : don't write .py[co] files on import; also PYTHONDONTWRITEBYTECODE=x
-c cmd : program passed in as string (terminates option list)
-d     : debug output from parser; also PYTHONDEBUG=x
-E     : ignore PYTHON* environment variables (such as PYTHONPATH)
-h     : print this help message and exit (also --help)
-i     : inspect interactively after running script; forces a prompt even
         if stdin does not appear to be a terminal; also PYTHONINSPECT=x
Use the -i option.
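Applied to your command:
sudo python -i get_gps.py
This runs the script to completion and then drops you into the interactive prompt with the script's globals available, so you can inspect and debug it live. (Note that -c means "program passed in as string", which is why your original invocation didn't do what you expected.)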
