How to start MLMD gRPC server? - python

I am doing a POC with the MLMD library. I want to expose a gRPC/REST service on the MLMD schemas.
I am referring to this official guide:
https://www.tensorflow.org/tfx/guide/mlmd#use_mlmd_with_a_remote_grpc_server
But I couldn't start the server the way it is described in the above tutorial.
bazel run -c opt --define grpc_no_ares=true //ml_metadata/metadata_store:metadata_store_server
I created a config file (mlmd_server.conf) in my local env.
connection_config {
  sqlite {
    filename_uri: '/tmp/test_db'
    connection_mode: READWRITE_OPENCREATE
  }
}
but it didn't work. I have not used Bazel before, and I am not sure whether I need to set some target for this build/run.
bazel run -c opt --define grpc_no_ares=true metadata_store_server_config_file=mlmd_server.conf
Can you please help?
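For context, arguments placed after -- in a bazel run command go to the built binary rather than to Bazel itself, so a working invocation would likely combine the target from the tutorial with the config flag. This is only a sketch; the flag name metadata_store_server_config_file is taken from the attempted command above, and the absolute path is an assumption:
bazel run -c opt --define grpc_no_ares=true //ml_metadata/metadata_store:metadata_store_server -- --metadata_store_server_config_file=/absolute/path/to/mlmd_server.conf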

Related

How do I properly set up my Python dependencies within Jenkins using pip and virtualenv?

I am a rookie to Jenkins, trying to set up a Python pytest test suite to run. In order to execute the test suite properly, I have to install several Python packages. I'm having trouble with this particular step because Jenkins is consistently unable to find virtualenv and pip:
pipeline {
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'BRANCH', type: 'PT_BRANCH', quickFilterEnabled: true
    }
    agent any
    stages {
        stage('Checkout source code') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '*/master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: '------', url: 'git@github.com:path-to-my-repo/my-test-repo.git']]])
            }
        }
        stage('Start Test Suite') {
            steps {
                sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
                echo "Checking out Test suite repo."
                sh script: 'virtualenv venv --distribute'
                sh label: 'install deps', script: '/Library/Frameworks/Python.framework/Versions/3.6/bin/pip install -r requirements.txt'
                sh label: 'execute test suite, exit upon first failure', script: 'pytest --verbose -x --junit-xml reports/results.xml'
            }
            post {
                always {
                    junit allowEmptyResults: true, testResults: 'reports/results.xml'
                }
            }
        }
    }
}
On the virtualenv venv --distribute step, Jenkins throws an error (I'm running this initially on a Jenkins server on my local instance, although in production it will be on an Amazon Linux 2 machine):
virtualenv venv --distribute /Users/Shared/Jenkins/Home/workspace/my-project-name@tmp/durable-5045c283/script.sh:
line 1: virtualenv: command not found
Why is this happening? In the step before, I make sure to prepend the directory where I know my virtualenv and pip are:
sh script: 'PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin/:$PATH'
For instance, when I type in
sudo su jenkins
which pip
which virtualenv
I get the following outputs as expected:
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/virtualenv
Here are the things I do know:
Jenkins runs as a user called jenkins
best practice is to create a virtual environment, activate it, and then perform my pip installations inside it
Jenkins runs sh by default, not bash (but I'm not sure if this has anything to do with my problem)
Why is Jenkins unable to find my virtualenv? What's the best practice for installing Python libraries for a Jenkins build?
Edit: I played around some more and found a working solution:
I don't know if this is the proper way to do it, but I used the following syntax:
withEnv(['PATH+EXTRA=/Library/Frameworks/Python.framework/Versions/3.6/bin/']) {
sh script: "pip install virtualenv"
// do other setup stuff
}
However, I'm now stuck with a new issue: I've clearly hardcoded my Python path here. If I'm running on a remote Linux machine, am I going to have to install that specific version of Python (3.6)?
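As a hedged sketch of a more portable alternative (assuming python3 is on the agent's PATH; the venv directory name is arbitrary), the sh steps could create and use a virtual environment instead of relying on an absolute framework path:
# create an isolated environment with whatever python3 the agent provides
python3 -m venv venv
# install dependencies and run the tests with the environment's own pip/pytest
./venv/bin/pip install -r requirements.txt
./venv/bin/pytest --verbose -x --junit-xml reports/results.xml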

NixOps - Configure Nginx proxy pass with Python Flask

I am new to Nix and trying to implement a service which exposes Python Flask web services through an Nginx proxy_pass. This is what I have tried so far.
with import <nixpkgs> {};
let
  buildInputs = [
    nginx
    python35Packages.python
    python35Packages.flask
    python35Packages.pyyaml
  ];
  installPhase = ''
    mkdir -p $out/pynix
    cp -rv src config.yml $out/pynix
    cd $out/pynix && nohup python src/main.py &> log.txt
  '';
in {
  network.description = "Local machine";
  webserver = {
    deployment = {
      targetEnv = "virtualbox";
      virtualbox.memorySize = 1024;
    };
    services = {
      nginx = {
        enable = true;
        config = ''
          http {
            include ${nginx}/conf/mime.types;
            server_name localhost;
            location / {
              proxy_pass http://localhost:5000;
            }
          }
        '';
      };
    };
  };
}
src/main.py is a Python Flask service running on port 5000. How can I get this web service up and running when I do nixops deploy -d DEPLOYMENT_NAME? Please help.
I think you've confused a package and a service. The package is the static output of the build, while the service manages the run-time activation of the package. I think your configuration currently attempts to describe a Python app that is run at build time, without any service to activate it at run time. This is pretty much the opposite of what you want, especially as with NixOps you are likely running your service in a different environment from the one where it was built.
You should be able to get an idea of what I mean by looking at the Nix expressions for the nginx package and the nginx service - specifically the systemd.services.nginx section. From there you can see that the nginx service manages the running of the nginx package. I think you will want to write similar expressions for your Python app. You could also see if there are expressions for existing Python-based NixOS services that you could use as a base, but the nginx expressions should be a sufficient guide too.
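To make the package-versus-service split concrete, a minimal sketch of the run-time side of such a configuration might look like the fragment below (placed inside the webserver machine definition). This is only an illustration: the unit name flask-app, the file flask-app.nix and the myFlaskApp derivation are hypothetical placeholders, and the nginx virtualHosts options are used instead of a raw config string.
{ config, pkgs, ... }:
let
  # hypothetical placeholders: a Python environment with Flask, and the derivation built from src/
  pythonEnv = pkgs.python3.withPackages (ps: [ ps.flask ]);
  myFlaskApp = pkgs.callPackage ./flask-app.nix { };
in {
  # run-time activation: a systemd unit that starts the already-built package
  systemd.services.flask-app = {
    description = "Flask web service behind nginx";
    wantedBy = [ "multi-user.target" ];
    after = [ "network.target" ];
    serviceConfig = {
      ExecStart = "${pythonEnv}/bin/python ${myFlaskApp}/pynix/src/main.py";
      Restart = "on-failure";
    };
  };

  # reverse proxy in front of the Flask app
  services.nginx = {
    enable = true;
    virtualHosts."localhost".locations."/".proxyPass = "http://localhost:5000";
  };
}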

How to remote debug python code in a Docker Container with VS Code

I've just registered for this question. It's about whether it's possible to remote-debug Python code in a Docker container with VS Code.
I have a completely configured Docker container here. I got a little bit of help with it, and I'm pretty new to Docker anyway. It runs Odoo v10, but I can't get remote debugging in VS Code to work. I have tried this explanation, but I don't really get it.
Is it even possible? And if yes, how can I get it to work?
I'm running Kubuntu 16.04 with VS Code 1.6.1 and the Python Extension from Don Jayamanne.
Ah yeah and I hope I am at the right location with this question and it's not against any rules.
UPDATE:
I just tried Elton Stoneman's approach. With it I'm getting this error:
There was an error in starting the debug server.
Error = {"code":"ECONNREFUSED","errno":"ECONNREFUSED","syscall":"connect",
"address":"172.21.0.4","port":3000}
My Dockerfile looks like this:
FROM **cut_out**
USER root
# debug/dev settings
RUN pip install \
watchdog
COPY workspace/pysrc /pysrc
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
build-essential \
python-dev \
&& /usr/bin/python /pysrc/setup_cython.py build_ext --inplace \
&& rm -rf /var/lib/apt/lists/*
EXPOSE 3000
USER odoo
The pysrc in my Dockerfile is there because this was intended for working with PyDev (Eclipse) before.
This is the run command I've used:
docker-compose run -d -p 3000:3000 odoo
And this is the important part of my launch.json:
{
"name": "Attach (Remote Debug)",
"type": "python",
"request": "attach",
"localRoot": "${workspaceRoot}",
"remoteRoot": "${workspaceRoot}",
"port": 3000,
"secret": "my_secret",
"host": "172.21.0.4"
}
I hope that's enough information for now.
UPDATE 2:
Alright, I found the solution. I totally misunderstood how Docker works and tried it completely wrong. I already had a completely configured docker-compose setup, so everything I needed to do was adapt my VS Code configs to the docker-compose.yml. This means I just had to change the port in launch.json to 8069 (the default Odoo port) and use docker-compose up; then the debugging works in VS Code.
Unfortunately the use of ptvsd kinda destroys my Odoo environment, but at least I'm able to debug now. Thanks!
Yes, this is possible - when the Python app is running in a Docker container, you can treat it like a remote machine.
In your Docker image, you'll need to make the remote debugging port available (e.g. EXPOSE 3000 in the Dockerfile), include the ptvsd setup in your Python app, and then publish the port when you run the container, something like:
docker run -d -p 3000:3000 my-image
Then use docker inspect to get the IP address of the running container, and that's what you use for the host in the launch file.
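A rough sketch of what the ptvsd setup inside the app might look like, assuming the older ptvsd attach API that matches the secret field in the launch.json above (the exact function signature differs between ptvsd versions, so treat this as illustrative rather than exact):
import ptvsd

# listen on all interfaces inside the container so the host can attach;
# the secret and port must match the launch.json ("my_secret", 3000)
ptvsd.enable_attach(secret='my_secret', address=('0.0.0.0', 3000))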
This works with VS Code 1.45.0 and later. For reference files, see https://gist.github.com/kerbrose/e646aaf9daece42b46091e2ca0eb55d0
1- Edit your docker.dev file and insert RUN pip3 install -U debugpy. This installs the Python package debugpy instead of the deprecated ptvsd, because your local VS Code will be communicating with the debugpy server in your Docker image.
2- Start your containers. However, you will now be starting them through the debugpy package you just installed. It could be the following command from your shell:
docker-compose run --rm -p 8888:3001 -p 8879:8069 {DOCKER IMAGE[:TAG|#DIGEST]} /usr/bin/python3 -m debugpy --listen 0.0.0.0:3001 /usr/bin/odoo --db_user=odoo --db_host=db --db_password=odoo
3- Prepare your launch file as follows. Please note that port relates to the Odoo server, while debugServer is the port for the debug server:
{
    "name": "Odoo: Attach",
    "type": "python",
    "request": "attach",
    "port": 8879,
    "debugServer": 8888,
    "host": "localhost",
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}",
            "remoteRoot": "/mnt/extra-addons"
        }
    ],
    "logToFile": true
}
If you want a nice step-by-step walkthrough of how to attach a remote debugger for VS Code in a container, you could check out the YouTube video "Debugging Python in Docker using VSCode".
He also talks about how to configure the Dockerfile so that the container does not include the debugger when run in production mode.
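One common pattern for keeping the debugger out of production images (an assumption on my part, not necessarily the approach shown in the video) is to gate its installation behind a build argument:
# in the Dockerfile: install the debugger only when explicitly requested
ARG INSTALL_DEBUGGER=false
RUN if [ "$INSTALL_DEBUGGER" = "true" ]; then pip3 install debugpy; fi
A development build would then pass --build-arg INSTALL_DEBUGGER=true to docker build, while production builds simply omit it.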

GCE module in Ansible cannot find apache-libcloud although gce.py works

I installed ansible and apache-libcloud with pip. Also, I can use the gcloud CLI, and ansible works for any non-GCE-related playbooks.
When using the gce module as a task to create instances in an ansible playbook, the following error occurs:
TASK: [Launch instances] ******************************************************
<127.0.0.1> REMOTE_MODULE gce instance_names=mm2 machine_type=f1-micro image=ubuntu-1204-precise-v20150625 zone=europe-west1-d service_account_email= pem_file=../pkey.pem project_id=fancystuff-11
<127.0.0.1> EXEC ['/bin/sh', '-c', 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889 && echo $HOME/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889']
<127.0.0.1> PUT /var/folders/v4/ll0_f8lj7yl7yghb645h95q9ckfc19/T/tmpyDoPt9 TO /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce
<127.0.0.1> EXEC ['/bin/sh', '-c', u'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/gce; rm -rf /Users/d046179/.ansible/tmp/ansible-tmp-1437669562.03-233461447935889/ >/dev/null 2>&1']
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='libcloud with GCE support (0.13.3+) required for this module'
FATAL: all hosts have already failed -- aborting
And the site.yml of the playbook I wrote:
- name: Create a sandbox instance
  hosts: localhost
  vars:
    names: mm2
    machine_type: f1-micro
    image: ubuntu-1204-precise-v20150625
    zone: europe-west1-d
    service_account_email: xxx@developer.gserviceaccount.com
    pem_file: ../pkey.pem
    project_id: fancystuff-11
  tasks:
    - name: Launch instances
      local_action: gce instance_names={{names}} machine_type={{machine_type}}
                    image={{image}} zone={{zone}} service_account_email={{ service_account_email }}
                    pem_file={{ pem_file }} project_id={{ project_id }}
      register: gce
The gce cloud module fails with the error message "libcloud with GCE support (0.13.3+) required for this module".
However, running gce.py from the ansible github repo works. The python script finds the apache-libcloud library and prints a json with all running instances. Besides, pip install apache-libcloud states it is installed properly.
Is there anything I am missing like an environment variable that points to the python libraries (PYTHONPATH)?
UPDATE 1:
I included the following task before the gce task:
- name: install libcloud
  pip: name=apache-libcloud
This also does not affect the behavior, nor does it prevent the error message.
UPDATE 2:
I added the following task to inspect the available PYTHONPATH:
- name: Getting PYTHONPATH
  local_action: shell python -c 'import sys; print(":".join(sys.path))'
  register: pythonpath
- debug:
    msg: "PYTHONPATH: {{ pythonpath.stdout }}"
The following is returned:
PYTHONPATH: :/usr/local/lib/python2.7/site-packages/setuptools-17.1.1-py2.7.egg:/usr/local/lib/python2.7/site-packages/pip-7.0.3-py2.7.egg:/usr/local/lib/python2.7/site-packages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload:/usr/local/lib/python2.7/site-packages:/Library/Python/2.7/site-packages
UPDATE 3:
I introduced my own test.py script as a task which executes the same apache-libcloud imports as the gce ansible module. The script imports just fine!!!
Setting the PYTHONPATH fixes the issue. For example:
$ export PYTHONPATH=/usr/local/lib/python2.7/site-packages/
I'm using OSX and I solved this for myself. Short answer: install ansible with pip (rather than e.g. brew).
I inspected the PYTHONPATH that Ansible sets at runtime and it looked like it had nothing to do with my normal system PYTHONPATH. E.g. for me, my system PYTHONPATH was empty, and setting it like e.g. mlazarov suggested didn't make any difference. I made ansible print the PYTHONPATH it uses at runtime, and it looked like this:
ok: [localhost] => {
"msg": "PYTHONPATH: :/usr/local/Cellar/ansible/1.9.4/libexec/lib/python2.7/site-packages:/usr/local/Cellar/ansible/1.9.4/libexec/vendor/lib/python2.7/site-packages:/Library/Frameworks/Python.framework/Versions/3.4/lib/python34.zip:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/plat-darwin:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/lib-dynload:/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages"
}
So there's only ansible's own site-packages and some strange Python 3 installations (I'm using Python 2.7).
Something in this discussion made me think it might be a problem with the ansible installation; my ansible was installed with brew. I reinstalled it globally with pip (simply running sudo pip install ansible), and that fixed the problem. Now the PYTHONPATH ansible prints looks much better, with my virtualenv Python installation at the beginning, and no more "libcloud with GCE support (0.13.3+) required for this module".
I was able to resolve the issue by setting the PYTHONPATH environment variable (export PYTHONPATH=/path/to/site-packages) with the current site-packages folder. Apparently, ansible establishes its own environment during module execution and ignores any paths available in python except the paths from the environment variable PYTHONPATH.
I find this a peculiar behavior which is not documented on the ansible websites.
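If exporting the variable in the shell is inconvenient, Ansible's environment keyword can set it per task instead. A sketch, assuming the same site-packages path as above and the gce task from the question (whether this reaches the module's interpreter in every Ansible version is an assumption worth verifying):
- name: Launch instances
  local_action: gce instance_names={{ names }} machine_type={{ machine_type }}
  environment:
    # directory where pip installed apache-libcloud; adjust for your system
    PYTHONPATH: /usr/local/lib/python2.7/site-packages
  register: gce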
I have a similar environment setup. I found some information at the bottom of this section: https://github.com/jlund/streisand#prerequisites
Essentially, there are some magic files you can update so that the brew'd ansible will add a folder to search for packages:
mkdir -p ~/Library/Python/2.7/lib/python/site-packages
echo '/usr/local/lib/python2.7/site-packages' > ~/Library/Python/2.7/lib/python/site-packages/homebrew.pth
Hope that fixes it for you!
In my case, it was simply a matter of:
pip install apache-libcloud

How to automatically run tests when there's any change in my project (Django)?

At the moment I am running python manage.py test every once in a while after I make significant changes in my Django project. Is it possible to run those tests automatically whenever I change and save a file in my project? It would be useful to detect bugs earlier (I know Rails has something like this with RSpec). I am using nose and django-nose. Thanks in advance.
Use entr:
$ brew install entr
$ find . -name '*.py' | entr python ./manage.py test
Or, for extra credit, combine it with ack:
$ ack --python | entr python ./manage.py test
If you want it to even find new files as you add them:
$ until ack -f --python | entr -d python ./manage.py test; do sleep 1; done
py.test answer (which also works for nose):
pip install pytest-xdist
py.test -f # will watch all subfolders for changes, and rerun the tests
Since py.test understands nose, this works for nose too.
I'm a JavaScript developer, so I used the tools JS developers have built with Node.js to achieve the same goal in my projects. It is very simple, but you also need to install Node.js to get it working.
I created a file called gruntfile.js in my project root directory:
//gruntfile.js
module.exports = function(grunt) {
  grunt.initConfig({
    watch: {
      files: ['*.py'],
      tasks: ['shell']
    },
    shell: {
      test: {
        command: 'python main_test.py'
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.loadNpmTasks('grunt-shell');
  grunt.registerTask('default', ['watch']);
};
What it's doing is basically watching any file in that directory that has a py extension; if one changes, it executes a shell command, which in this case is my Python test (you might wanna change it, my test file was main_test.py). In order to run this grunt script you need to install Node.js, and after that you will have npm in your global path. After that you need to install a few node modules as well. All these modules except grunt-cli will be stored in your current folder, so make sure you are at the root of your project or whatever folder you put that gruntfile.js in. Then run the following commands:
npm install grunt-cli -g
npm install grunt
npm install grunt-contrib-watch
npm install grunt-shell
Don't worry about the size, these are very small modules. Now that you have everything set up you can simply run grunt and it will start watching your py files; when you save them it will run your tests. It may not be the best way to run Python tests, but as I said, I'm a JavaScript developer and I think Grunt provides a very simple way of executing tests even for other languages, so I use it.
I just tried nose-watch and it worked fantastically! Install the plugin and run the test with the --with-watch option.
Update: :( it does not seem to work well when running tests from django-nose's manage.py helper.
Eventually I opted to use tdaemon, which supports Django, although it might require a bit of fiddling as well for full-fledged projects.
For example here is how I ran it for my django project:
tdaemon -t django --custom-args=a_specific_app_to_test -d --ignore-dirs logs
The --custom-args option was to focus the tests on a specific app (the same as you would do with python manage.py test a_specific_app_to_test).
The -d argument is to enable debug logging, which prints which file change triggered the run.
The --ignore-dirs was necessary because my tests wrote to the logs (which in itself is a problem!) and tdaemon went into an infinite loop.
Another JavaScript dev here: I've found nodemon (https://github.com/remy/nodemon) to work pretty well. By default it watches *.js files, but that's configurable with the --ext flag. To use it, do:
npm install -g nodemon
cd /your/project/dir/
nodemon --ext py --exec "python manage.py test"
Now, whenever a *.py file changes, it'll re-run your command. It even finds new files.
I have used watchr, with something like Watchr.
I did this using gulp. Install gulp-shell:
npm install gulp-shell --save-dev
And in the gulpfile.js:
var gulp = require('gulp')
var shell = require('gulp-shell')

gulp.task('run-tests', shell.task([
  'python3 manage.py test'
]))

gulp.task('watch', function(){
  gulp.watch(['./your-project-name/**/*.html', './your-project-name/**/*.py'], ['run-tests']);
});

gulp.task('default', ['run-tests', 'watch']);
And it runs the tests anytime there are changes saved to any .py or .html files!
You can use Django Supervisor on top of Django. This will avoid the use of a CI tool (which may be useful in any case, this isn't invalidating the other response - maybe just complementary).
I would recommend setting up django-nose and sniffer. It's quite easy to set up and works great. Something along the lines of this scent.py was the only customization I needed. Then you can just run sniffer -x myapp.tests.
Nose comes with some other goodies that make tests a bit nicer to work with as well.
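For reference, a scent.py along those lines might look roughly like the sketch below. It is based on sniffer's documented decorator API, and the app label myapp is a placeholder:
from subprocess import call

from sniffer.api import file_validator, runnable

# only trigger a run when Python files change
@file_validator
def py_files(filename):
    return filename.endswith('.py')

# sniffer treats a truthy return value as a passing run
@runnable
def run_django_tests(*args):
    return call(['python', 'manage.py', 'test', 'myapp.tests']) == 0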
If you use git to manage your code, another way is to use a git pre-commit hook.
If you hit an error like remote: fatal: Not a git repository: '.', check this post: https://stackoverflow.com/a/4100577/7007942
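A minimal sketch of such a hook, saved as .git/hooks/pre-commit and made executable (note this runs the suite on every commit rather than on every file save, which is an assumption about the workflow you want):
#!/bin/sh
# abort the commit if the Django test suite fails
python manage.py test || exit 1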
I've found the easiest way is to use gulp as recommended by this post. Note that gulp-shell (recommended by other answers) is actually blacklisted by gulp, but thankfully you don't even need a plugin. Try this instead:
// gulpfile.js
const { watch } = require('gulp')
var exec = require('child_process').exec

function test (cb) {
  exec('python manage.py test', function (err, stdout, stderr) {
    console.log(stdout)
    console.log(stderr)
    cb(err)
  })
}

exports.default = function () {
  watch('./**/*.py', test)
}
In the past, I've tried many of the options suggested in other answers. This one was comparatively painless. It's helpful if you have some knowledge of JavaScript.
I wrote a Gulp task to automatically run ./manage.py test whenever any specified Python files are changed or removed. You'll need Node for this.
First, install Gulp:
yarn add -D gulp@next
Then use the following gulpfile.js and make any necessary adjustments:
const { exec } = require('child_process');
const gulp = require('gulp');

const runDjangoTests = (done) => {
  const task = exec('./manage.py test --keepdb', {
    shell: '/bin/bash', // Accept SIGTERM signal. Doesn't work with /bin/sh
  });
  task.stdout.pipe(process.stdout);
  task.stderr.pipe(process.stderr);
  task.on('exit', () => {
    done();
  });
  return task;
};

gulp.task('test', runDjangoTests);

gulp.task('watch', (done) => {
  let activeTask;
  const watcher = gulp.watch('**/*.py', {
    // Ignore whatever directories you need to
    ignored: ['.venv/*', 'node_modules'],
  });

  const runTask = (message) => {
    if (activeTask) {
      activeTask.kill();
      console.log('\n');
    }
    console.log(message);
    activeTask = runDjangoTests(done);
  };

  watcher.on('change', (path) => {
    runTask(`File ${path} was changed. Running tests:`);
  });

  watcher.on('unlink', (path) => {
    runTask(`File ${path} was removed. Running tests:`);
  });
});
Then simply run node node_modules/gulp/bin/gulp.js watch to run the task :)
https://pycrunch.com/
One of its key features is that it runs only the impacted tests on file changes. It is available as a plugin for PyCharm.
On ubuntu this script works, after installing inotifywait (sudo apt install inotify-tools):
inotifywait -qrm -e close_write * |
while read -r filename event; do
python manage.py test accord
done
Similar to entr you can use watchexec with
brew install watchexec
watchexec -rc -e py -- python ./manage.py test
with options
-r # restart
-c # clear screen
-e extensions # list of file extensions to filter by
You'll need a continuous integration server, something like Jenkins.
