Hi, I am trying to write a Python script that does exactly what the following command does:
gcloud logging read "logName=projects/[project_name]/logs/[id]"
When I run that command from the CLI it does not give me any error; it outputs the logs as expected.
However, when I run my Python script:
import argparse
import datetime
import os
import sys
from pprint import pprint
from google.cloud import bigquery
from google.cloud import logging
assert "GOOGLE_APPLICATION_CREDENTIALS" in os.environ
def main():
    client = logging.Client()
    log_name = 'log_id'
    logger = client.logger(log_name)
    for entry in logger.list_entries():
        print(entry.payload)

if __name__ == "__main__":
    main()
I get the error:
google.api_core.exceptions.PermissionDenied: 403 The caller does not have permission
I'm not sure what to do here; since the command line works, I clearly have permission.
Any thoughts would be greatly appreciated.
I see that you are trying to read and show your logs from Cloud Logging using Python.
From the error code you got:
error: google.api_core.exceptions.PermissionDenied: 403
I think this comes from an authentication problem. I would like to share these documents with you: the Python quickstart to write, read, delete, and export log entries [1]; and authentication on GCE instances [2].
[1] https://cloud.google.com/logging/docs/quickstart-python#linux
[2] https://googleapis.dev/python/google-api-core/latest/auth.html#using-google-compute-engine
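If the gcloud CLI works but the Python client returns 403, a common cause is that the two are using different credentials (gcloud uses its own auth, while the client library uses GOOGLE_APPLICATION_CREDENTIALS) or pointing at different projects. Here is a minimal sketch that makes both explicit; the service-account path is a placeholder, and the filter mirrors your gcloud command:

from google.cloud import logging
from google.oauth2 import service_account

# Placeholder path; point this at the key file behind GOOGLE_APPLICATION_CREDENTIALS.
creds = service_account.Credentials.from_service_account_file(
    "/path/to/service-account.json")
client = logging.Client(project="[project_name]", credentials=creds)

# Same filter the gcloud command uses.
log_filter = 'logName="projects/[project_name]/logs/[id]"'
for entry in client.list_entries(filter_=log_filter):
    print(entry.payload)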
I have a simple Python script that uses the Elasticsearch module "curator" to make snapshots.
I've tested my code locally and it works.
Now I want to run it in an AWS Lambda function, but I get this error:
Unable to import module 'lambda_function': No module named 'error'
Here is how I proceeded:
I manually created a Lambda and gave it an "AISA-BasicLambdaExecutionRole" role. Then I created my package with my function and the dependencies, which I installed with the command:
pip install elasticsearch-curator -t /<path>/myRepository
I zipped the contents (not the folder) and uploaded the archive to my Lambda.
I changed the Handler name to "lambda_function.lambda_handler" (my file's name is "lambda_function.py").
Did I miss something? This is my first time working with Lambda and Python.
I've seen the other questions about this error:
"errorMessage": "Unable to import module 'lambda_function'"
But nothing works for me.
EDIT:
Here is my lambda_function:
from __future__ import print_function
import curator
import time
from curator.exceptions import NoIndices
from elasticsearch import Elasticsearch

def lambda_handler(event, context):
    es = Elasticsearch()
    index_list = curator.IndexList(es)
    index_list.filter_by_regex(kind='prefix', value="logstash-")
    Number = 1
    try:
        while Number <= 3:
            Name = "snapshotLmbd_n_" + str(Number)
            curator.Snapshot(index_list, repository="s3-backup",
                             name=Name, wait_for_completion=True).do_action()
            Number += 1
            print('Just taking a nap ! will be back soon')
            time.sleep(30)
    except KeyboardInterrupt:
        print('My bad ! I interrupted this')
        return
Thank you for your time.
OK, since you have everything else correct, check the permissions of the Python script.
It must have executable permissions (755).
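Separately from file permissions, you can sanity-check that the package actually contains everything the handler needs by running a small import check from the root of the unzipped deployment directory before zipping it (a sketch only; it proves the imports resolve locally, not that any compiled dependencies work on Lambda's platform):

from __future__ import print_function
import importlib

# Run this from the directory whose contents you zip for the Lambda package.
for name in ("lambda_function", "curator", "elasticsearch"):
    importlib.import_module(name)
    print("imported OK: " + name)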
I'm new to Python. This is my first Ansible module, written to delete the SimpleDB domain left behind by a ChaosMonkey deletion.
When tested in my local venv on Mac OS X, it keeps saying:
Module unable to decode valid JSON on stdin. Unable to figure out what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    response = client.delete_domain(DomainName='module.params['sdb_domain_name']')
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
And I'm trying to pass in parameters from this file: /tmp/args.json.
and I run the following command for the local test:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax error in my module, please also point it out.
This is not how you should test your modules.
AnsibleModule expects specific JSON on stdin.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet your JSON file is in the wrong format (no ANSIBLE_MODULE_ARGS, etc.).
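For reference, a minimal sketch of an args file in the shape recent Ansible versions expect on stdin (the domain name is just an example value, and the exact wrapper can vary by Ansible version):

import json

# Module arguments wrapped in ANSIBLE_MODULE_ARGS, as newer Ansible expects on stdin.
args = {"ANSIBLE_MODULE_ARGS": {"sdb_domain_name": "my-domain"}}
with open("/tmp/args.json", "w") as f:
    json.dump(args, f)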
To debug your modules you can use the test-module script from the Ansible hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"
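As an aside, since you asked about syntax errors: the delete_domain line as posted will not even parse because of the nested quotes; presumably it should pass the module parameter directly, e.g.:

response = client.delete_domain(DomainName=module.params['sdb_domain_name'])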
I have a Python module called user_module, which resides in a mounted network location.
In a script I'm using, I need to import that module, but due to NFS issues, sometimes this path isn't available until we actually change directory to the relevant one and/or restart the autofs service.
In order to reproduce the issue and try to work around it, I manually stopped the autofs service and ran my script with my workaround (probably not the most elegant one):
import os
import sys
from subprocess import call

PATH = "/some/network/path"
sys.path.append(PATH)

try:
    os.chdir(PATH)
    import user_module
except:
    print "Unable to load user_module, trying to restart autofs service"
    call(['service', 'autofs', 'restart'])
    os.chdir(PATH)
    import user_module  # Throws ImportError!
But I still get an import error because the path is unavailable.
Now this is what I find weird: on the same machine, I tried executing the same operations as in my script, after intentionally stopping the autofs service first, and it works perfectly:
[root@machine]: service autofs stop   # To reproduce the import error
[root@machine]: python
>>> import os
>>> import sys
>>> from subprocess import call
>>> PATH="/some/network/path"
>>> sys.path.append(PATH)
>>> os.chdir(PATH)
######################################################################
################## exception of no such file or directory ############
######################################################################
>>> call(['service', 'autofs', 'restart'])
>>> os.chdir(PATH) # No exception now after restarting the service
>>> import user_module # NO Import error here
Can someone shed some light on the situation
and explain to me why the same methodology works through the Python CLI, but not through a script?
What is it that I don't know, or what am I missing here?
Also, how can I overcome this?
Thanks
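For what it's worth, a slightly more defensive variant of the same workaround, sketched below under the assumption that the mount simply needs a moment to come back after restarting autofs (the wait_for_path helper is hypothetical), retries the chdir a few times before importing:

import os
import sys
import time
from subprocess import call

PATH = "/some/network/path"
sys.path.append(PATH)

def wait_for_path(path, attempts=5, delay=2):
    # Keep retrying until the autofs mount is reachable again, or give up.
    for _ in range(attempts):
        try:
            os.chdir(path)
            return True
        except OSError:
            time.sleep(delay)
    return False

try:
    os.chdir(PATH)
    import user_module
except (OSError, ImportError):
    print "Unable to load user_module, trying to restart autofs service"
    call(['service', 'autofs', 'restart'])
    if wait_for_path(PATH):
        import user_module
    else:
        sys.exit("network path still unavailable")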
I am running the following script inside AWS Lambda:
#!/usr/bin/python
from __future__ import print_function
import json
import os
import ansible.inventory
import ansible.playbook
import ansible.runner
import ansible.constants
from ansible import utils
from ansible import callbacks

print('Loading function')

def run_playbook(**kwargs):
    stats = callbacks.AggregateStats()
    playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
    runner_cb = callbacks.PlaybookRunnerCallbacks(
        stats, verbose=utils.VERBOSITY)

    # use /tmp instead of $HOME
    ansible.constants.DEFAULT_REMOTE_TMP = '/tmp/ansible'

    out = ansible.playbook.PlayBook(
        callbacks=playbook_cb,
        runner_callbacks=runner_cb,
        stats=stats,
        **kwargs
    ).run()

    return out

def lambda_handler(event, context):
    return main()

def main():
    out = run_playbook(
        playbook='little.yml',
        inventory=ansible.inventory.Inventory(['localhost'])
    )
    return(out)

if __name__ == '__main__':
    main()
However, I get the following error: failed=True msg='boto required for this module'
Yet, according to this comment (https://github.com/ansible/ansible/issues/5734#issuecomment-33135727), it works.
But I don't understand how to specify that in my script. Or can I have a separate hosts file and include it in the script, in the same way that I call my playbook?
If so, then how?
[EDIT - 1]
I have added the line inventory=ansible.inventory.Inventory('hosts')
with the hosts file as:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
But I get this error: /bin/sh: /usr/local/bin/python: No such file or directory
So where is Python located inside AWS Lambda?
I installed boto just like I installed other packages in the Lambda's deployment package: pip install boto -t <folder-name>
The bash command which python will usually give the location of the Python binary. There's an example of how to call a bash script from AWS Lambda here.
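If you want to check from inside the Lambda runtime itself, a small sketch like this will print the location (sys.executable is the interpreter running the handler; the subprocess call mirrors which python and may fail if python is not on the PATH):

import subprocess
import sys

def lambda_handler(event, context):
    # The interpreter that is executing this handler.
    print(sys.executable)
    # Mirrors `which python`; wrapped because python may not be on PATH.
    try:
        print(subprocess.check_output(['which', 'python']))
    except Exception as exc:
        print(exc)
    return sys.executable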
Sample Server
I have a Python script, shown below, copied to the /var/www/cgi-bin folder with permissions set to 775.
#!/usr/bin/env python
print "Content-type: text/plain\n\n";
print "testing...\n";
import cgitb; cgitb.enable()
import cgi
from jsonrpc import handleCGI, ServiceMethod
import json
from datetime import datetime

@ServiceMethod
def echo():
    return "Hello"

if __name__ == "__main__":
    handleCGI()
Sample Client
Now, I am accessing this simple echo service using the client code below.
from jsonrpc import ServiceProxy
import json
s = ServiceProxy("http://localhost/cgi-bin/t2.py")
print s.echo()
1/ I am getting the below error when I run the above client. Any thoughts?
2/ Is there any issue with my httpd.conf settings?
File "/usr/lib/python2.7/site-packages/jsonrpc/proxy.py", line 43, in __call__
resp = loads(respdata)
File "/usr/lib/python2.7/site-packages/jsonrpc/json.py", line 211, in loads
raise JSONDecodeException('Expected []{}," or Number, Null, False or True')
jsonrpc.json.JSONDecodeException: Expected []{}," or Number, Null, False or True
Note: I am using the example at the link below, which handles JSON the CGI way.
http://json-rpc.org/wiki/python-json-rpc
Please let me know.
Thanks!
Santhosh
I know this is super late, but I found this question when I had the same problem. In hopes it helps someone else, I will post my solution.
In my case it was as simple (stupid) as making the Python file itself executable, i.e. chmod 755 t2.py.