When I try to integrate Python logging with Airbrake, I get the following error:
main.py
import pybrake
from config2.config import config

airbrake_handler = None

def filter_airbrake_msgs(notice):
    if config.environment in ['production', 'staging']:
        return notice
    return None

def config_airbrake():
    print(config.py_brake)
    notifier = pybrake.Notifier(
        project_id=config.py_brake.project_id,
        project_key=config.py_brake.project_key
    )

config_airbrake()
ENV=development python3 main.py
Error:
ERROR pybrake get_git_revision failed: [Errno 2] No such file or directory: '/user/xxx/xxx/xxx/.git/HEAD'
That looks like a log message coming from https://github.com/airbrake/pybrake/blob/master/pybrake/git.py#L12. I see why it is confusing, but it is actually harmless and you can ignore it. I've created an issue to remove that log message. And feel free to use GitHub issues for such questions in the future.
Overall, pybrake checks whether the directory stored in context.rootDirectory contains a .git folder. If it does, it tries to extract some info such as the git revision, checkout date, etc. Otherwise it logs the error you saw.
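To actually wire the standard logging module to Airbrake, which is what the question asks, here is a minimal sketch based on the pybrake README (the LoggingHandler and add_filter calls come from the README, not from the original post; the config object is the same one used above):

import logging

import pybrake
from config2.config import config


def filter_airbrake_msgs(notice):
    # Only send notices from production/staging, as in the question's filter.
    if config.environment in ['production', 'staging']:
        return notice
    return None


notifier = pybrake.Notifier(
    project_id=config.py_brake.project_id,
    project_key=config.py_brake.project_key,
)
notifier.add_filter(filter_airbrake_msgs)

# Forward ERROR-level records from the logging module to Airbrake.
airbrake_handler = pybrake.LoggingHandler(notifier=notifier, level=logging.ERROR)
logging.getLogger().addHandler(airbrake_handler)

logging.getLogger(__name__).error('something bad happened')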
I'm trying to import a custom module that I created, but just importing it breaks my API.
Directory structure:
- src
  - order
    - __init__.py
    - app.py
    - validator.py
    - requirements.txt
  - __init__.py
In my app.py I have this code:
import json
from .validator import validate

def handler(event, context):
    msg = ''
    if event['httpMethod'] == 'GET':
        msg = "GET"
    elif event['httpMethod'] == 'POST':
        pass  # msg = validate(json.loads(event['body']))
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": msg,
        }),
    }
I get this error:
Unable to import module 'app': attempted relative import with no known parent package
However, if I remove line 2 (from .validator import validate) from my code, it works fine, so the problem is with that import, and honestly, I can't figure out what is going on. I have tried importing with:
from src.order.validator import validate
but it doesn't work either.
I was able to solve my issue by generating a build with the command sam build, zipping the output, and putting it in the root folder inside aws-sam. It's not a great solution because I have to rebuild after every small change, but at least it's a workaround for now.
It seems app.py has not been loaded as part of the package hierarchy (i.e. the src and order packages have not been loaded). You should be able to run
from src.order import app
from the parent directory of src and your code will work. If you run python app.py from the terminal, which I assume is what you did, app.py is run as a standalone script, not as part of a package.
However, I do not believe you need the .validator in your case since both modules are in the same directory. You should be able to do
from validator import validate
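To sanity-check the package-relative import locally, here is a small sketch; run_local.py is a hypothetical helper (not part of the original project) placed in the parent directory of src, and the event dict is a minimal stand-in for an API Gateway request:

# run_local.py -- hypothetical helper, placed in the parent directory of src/.
# Running `python run_local.py` imports src and src.order as packages,
# so the relative import inside app.py resolves.
from src.order import app

# Minimal stand-in for an API Gateway proxy event.
event = {"httpMethod": "GET"}
print(app.handler(event, context=None))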
I am using Fabric 2 and I don't see an exists method in it to check whether a folder path exists on the remote server. Please let me know how I can achieve this in Fabric 2: http://docs.fabfile.org/en/stable/.
I have seen a similar question, Check If Path Exists Using Fabric, but that is for Fabric 1.x.
You can execute the test command remotely with the -d option to check that the path exists and is a directory, passing warn=True to the run method so execution doesn't stop on a non-zero exit status. The failed attribute of the result will then be True if the folder doesn't exist and False otherwise.
folder = '/path/to/folder'
if c.run('test -d {}'.format(folder), warn=True).failed:
    # Folder doesn't exist
    c.run('mkdir {}'.format(folder))
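If you need this check in more than one place, the same test -d / warn=True pattern can be wrapped in a small helper; remote_dir_exists is just a name I'm using here, not something Fabric provides:

from fabric import Connection


def remote_dir_exists(c, path):
    """Return True if `path` exists and is a directory on the remote host."""
    # warn=True keeps run() from raising on a non-zero exit status,
    # so we can inspect .failed instead.
    return not c.run('test -d {}'.format(path), warn=True).failed


c = Connection('host')
folder = '/path/to/folder'
if not remote_dir_exists(c, folder):
    c.run('mkdir -p {}'.format(folder))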
The exists method from fabric.contrib.files was moved to patchwork.files with a small signature change, so you can use it like this:
from fabric2 import Connection
from patchwork.files import exists

conn = Connection('host')
if exists(conn, SOME_REMOTE_DIR):
    do_something()
The code below checks for the existence of a file (-f); just change it to -d to check for the existence of a directory.
from fabric import Connection

c = Connection(host="host")
if c.run('test -f /opt/mydata/myfile', warn=True).failed:
    do.thing()
You can find it in the Fabric 2 documentation below:
https://docs.fabfile.org/en/2.5/getting-started.html?highlight=failed#bringing-it-all-together
Hi, that's not so difficult; you can use traditional Python code to check whether a path already exists.
from pathlib import Path
from fabric import Connection as connection, task
import os

@task
def deploy(ctx):
    parent_deploy_dir = '/var/www'
    deploy_dir = '/var/www/my_folder'
    host = 'REMOTE_HOST'
    user = 'USER'
    with connection(host=host, user=user) as c:
        with c.cd(parent_deploy_dir):
            if not os.path.isdir(Path(deploy_dir)):
                c.run('mkdir -p ' + deploy_dir)
I use the App Engine Flexible Environment (previously called Managed VMs), and recently upgraded to the latest gcloud SDK. The upgrade introduced some new errors:
ERROR: (gcloud.preview.app.deploy) Error Response: [400] Invalid character in filename: lib/setuptools/script (dev).tmpl
ERROR: The [application] field is specified in file [.../app.yaml]. This field is not used
by gcloud and must be removed. Project name should instead be
specified either by `gcloud config set project MY_PROJECT` or by
setting the `--project` flag on individual command executions.
ERROR: (gcloud.preview.app.deploy) There is a Dockerfile in the
current directory, and the runtime field in
.../app.yaml is currently set to
[runtime: python27]. To use your Dockerfile to build a custom runtime,
set the runtime field in .../app.yaml
to [runtime: custom]. To continue using the [python27] runtime, please
omit the Dockerfile from this directory.
I fixed these errors and was able to publish again, but started seeing errors like this:
Failed to import google/appengine/ext/deferred/handler.py
Traceback (most recent call last):
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/meta_app.py", line 549, in GetUserAppAndServe
app, mod_file = self.GetUserApp(script)
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/meta_app.py", line 410, in GetUserApp
app = _AppFrom27StyleScript(script)
File "/home/vmagent/python_vm_runtime/google/appengine/ext/vmruntime/meta_app.py", line 270, in _AppFrom27StyleScript
app, filename, err = wsgi.LoadObject(script)
File "/home/vmagent/python_vm_runtime/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
ImportError: Import by filename is not supported.
After a bit of digging, I figured out what is going on. Namely, the code that processes this:
builtins:
- remote_api: on
- appstats: on
- deferred: on
is broken with Managed VMs. The correct fix is to eliminate these and inline the builtin includes instead; the relevant include files live in the per-builtin subdirectories under google/appengine/ext/builtins/.
In my case, that meant adding this to my handlers: directive:
- url: /_ah/queue/deferred
  script: google.appengine.ext.deferred.application
  login: admin
- url: /_ah/stats.*
  script: google.appengine.ext.appstats.ui.app
- url: /_ah/remote_api(/.*)?
  script: google.appengine.ext.remote_api.handler.application
As to why, you can understand more here. In google/appengine/ext/builtins/__init__.py#L92, it attempts to find the relevant include file using the runtime: field in your app.yaml. This means that where it previously looked up deferred/include-python27.yaml, it now attempts to find deferred/include-custom.yaml (due to fixing the errors above) and fails, so it defaults to deferred/include.yaml, which lists the include script by path name instead of module name. This then breaks in a python27-custom-VM setup (since it expects module names).
Original Question
I've got some python scripts which have been using Amazon S3 to upload screenshots taken following Selenium tests within the script.
Now we're moving from S3 to GitHub, so I've found GitPython, but I can't see how to use it to actually commit to the local repo and push to the server.
My script builds a directory structure similar to \images\228M\View_Use_Case\1.png in the workspace, and when uploading to S3 it was a simple process:
for root, dirs, files in os.walk(imagesPath):
    for name in files:
        filename = os.path.join(root, name)
        k = bucket.new_key('{0}/{1}/{2}'.format(revisionNumber, images_process, name))  # returns a new key object
        k.set_contents_from_filename(filename, policy='public-read')  # opens local file buffers to key on S3
        k.set_metadata('Content-Type', 'image/png')
Is there something similar for this, or is there something as simple as a bash-style git add images command in GitPython that I've completely missed?
Updated with Fabric
So I've installed Fabric on kracekumar's recommendation but I can't find docs on how to define the (GitHub) hosts.
My script is pretty simple, just trying to get the upload to work:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
import os

def git_server():
    env.hosts = ['github.com']
    env.user = 'git'
    env.password = 'password'

def test():
    process = 'View Employee'
    os.chdir('\Work\BPTRTI\main\employer_toolkit')
    with cd('\Work\BPTRTI\main\employer_toolkit'):
        result = local('ant viewEmployee_git')
        if result.failed and not confirm("Tests failed. Continue anyway?"):
            abort("Aborting at user request.")

def deploy():
    process = "View Employee"
    os.chdir('\Documents and Settings\markw\GitTest')
    with cd('\Documents and Settings\markw\GitTest'):
        local('git add images')
        local('git commit -m "Latest Selenium screenshots for %s"' % (process))
        local('git push -u origin master')

def viewEmployee():
    #test()
    deploy()
It Works \o/ Hurrah.
You should look into Fabric: http://docs.fabfile.org/en/1.4.1/index.html. It's an automated server deployment tool. I have been using it for quite some time and it works pretty well.
Here is one of my applications which uses it: https://github.com/kracekumar/sachintweets/blob/master/fabfile.py
It looks like you can do this:
index = repo.index
index.add(['images'])
new_commit = index.commit("my commit message")
and then, assuming you have origin as the default remote:
origin = repo.remotes.origin
origin.push()
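Putting that together with the script above, the deploy() step could use GitPython directly instead of shelling out to git; a sketch, assuming the repository path from the question and that the origin remote is already configured and authenticated (e.g. over SSH):

from git import Repo  # GitPython


def deploy():
    process = "View Employee"
    # Path to the local clone, taken from the original script.
    repo = Repo(r'\Documents and Settings\markw\GitTest')
    repo.index.add(['images'])  # stage the images directory, as in the GitPython answer above
    repo.index.commit('Latest Selenium screenshots for %s' % process)
    repo.remotes.origin.push()  # push to the configured remote branch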
I'm looking at using psphere. I've successfully installed it via pip but run into an issue when I execute the following code (taken from the psphere documentation site):
from psphere.client import Client
from psphere.managedobjects import HostSystem

client = Client("server", "username", "password")
hs_list = HostSystem.all(client)
len(hs_list)
After running that command I get the following:
"could not found expected ':'", self.get_mark())
yaml.scanner.ScannerError: while scanning a simple key
in "C:\Users\thor/.psphere/config.yaml", line 5, column 1
could not found expected ':'
in "C:\Users\thor/.psphere/config.yaml", line 6, column 1
There was no .psphere directory or config.yaml. I created both, and no joy (although I must be honest, I don't really know what should be in the yaml file).
There are no colons (:) in my file, so I don't know why it's raising an exception.
Help would be appreciated. Thanks.
In the distribution's examples/ directory there is a sample_config.yaml file you can use.
# Copy this file into ~/.psphere/config.yaml and edit to your liking
general:
  server: your.esxserver.com
  username: Administrator
  password: strongpassword
  template_dir: ~/.psphere/templates/
logging:
  destination: ~/.psphere/psphere.log
  level: INFO  # DEBUG, INFO, etc