I want to write a new service for Jupyter Notebook. But I'm having trouble figuring out how to get it to run. I've created a service similar to the default services found here https://github.com/jupyter/notebook/tree/master/notebook/services.
I'm attempting to run it in a Docker container built from jupyter/base-notebook. I've added c.NotebookApp.extra_services = ['TestHandler'] to the Notebook config and I've copied my service to /opt/conda/lib/python3.6/site-packages/notebook/services/test.py.
When I start the Notebook server I get an error saying ModuleNotFoundError: No module named 'TestHandler' so obviously my service is not being loaded correctly. Unfortunately I can't find any documentation on how to load a service in Jupyter Notebook.
This is my test.py service:
import json
from tornado import web
from ...base.handlers import APIHandler

class TestHandler(APIHandler):
    @web.authenticated
    def get(self):
        res = {"foo": "bar"}
        self.finish(json.dumps(res))

default_handlers = [
    (r"/api/test", TestHandler),
]
That config value expects a list of importable modules which have a default_handlers attribute. You can therefore either use a full path:
# assuming your `test.py` lives in a top level package `mypackage`
c.NotebookApp.extra_services = ['mypackage.test']
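For that import path to resolve, mypackage has to be importable by the notebook server process; one possible layout (the package name and install location are assumptions):
mypackage/          # anywhere on the notebook server's PYTHONPATH inside the container
    __init__.py
    test.py         # the module from the question, exposing default_handlers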
Alternatively, you can construct a module directly:
import sys
from types import ModuleType
name = 'some_long_ass_name_that_doesnt_conflict'
sys.modules[name] = ModuleType(name) # make the import machinery find it
sys.modules[name].default_handlers = [...]
c.NotebookApp.extra_services = [name]
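Putting the second option together, here is a minimal sketch of a jupyter_notebook_config.py that defines the handler inline; the module name and route are only illustrative:
import json
import sys
from types import ModuleType

from notebook.base.handlers import APIHandler
from tornado import web

class InlineTestHandler(APIHandler):
    @web.authenticated
    def get(self):
        self.finish(json.dumps({"foo": "bar"}))

name = 'inline_test_service'  # any name that doesn't clash with a real module
mod = ModuleType(name)
mod.default_handlers = [(r"/api/test", InlineTestHandler)]
sys.modules[name] = mod  # make the import machinery find it

c.NotebookApp.extra_services = [name]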
I'm trying to import a custom module that I created, but merely importing it breaks my API.
Directory structure:
-src
--order
----__init__.py
----app.py
----validator.py
----requirements.txt
--__init__.py
In app.py I have this code:
import json
from .validator import validate

def handler(event, context):
    msg = ''
    if event['httpMethod'] == 'GET':
        msg = "GET"
    elif event['httpMethod'] == 'POST':
        pass  # msg = validate(json.loads(event['body']))
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": msg,
        }),
    }
I get this error:
Unable to import module 'app': attempted relative import with no known parent package
However, if I remove line 2 (from .validator import validate) from my code, it works fine, so the problem is with that import, and honestly I can't figure out what is going on. I have tried importing with:
from src.order.validator import validate
but it doesn't work either.
I was able to solve my issue by generating a build with the command sam build, zipping the output, and putting it in the root folder inside aws-sam. It's not a great solution because I have to rebuild after every small change, but at least it's a workaround for now.
It seems app.py has not been loaded as part of the package hierarchy (i.e. src and order packages have not been loaded). You should be able to run
from src.order import app
from the parent directory of src and your code will work. If you run python app.py from the terminal (which I assume is what you did), app.py is run as a standalone script, not as part of a package.
However, I do not believe you need the .validator in your case since both modules are in the same directory. You should be able to do
from validator import validate
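As a quick local check of the package-style import (keeping from .validator import validate), you could exercise the handler from the parent directory of src; the fake event below is only an illustration:
# run from the parent directory of src, e.g. in a small check script or a REPL
from src.order.app import handler

event = {"httpMethod": "GET"}  # minimal stand-in for an API Gateway event
print(handler(event, None))    # expect {'statusCode': 200, 'body': '{"message": "GET"}'}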
I am trying to implement a hostname-like module, and my target machine is an Amazon EC2 instance. But when I run the script it gives me the error below:
[ansible-user#ansible-master ~]$ ansible node1 -m edit_hostname.py -a node2
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
My module is like this:
#!/usr/bin/python
from ansible.module_utils.basic import *

try:
    import json
except ImportError:
    import simplejson as json

def write_to_file(module, hostname, hostname_file):
    try:
        with open(hostname_file, 'w+') as f:
            try:
                f.write("%s\n" % hostname)
            finally:
                f.close()
    except Exception:
        err = get_exception()
        module.fail_json(msg="failed to write to the /etc/hostname file")

def main():
    hostname_file = '/etc/hostname'
    module = AnsibleModule(argument_spec=dict(name=dict(required=True, type=str)))
    name = module.params['name']
    write_to_file(module, name, hostname_file)
    module.exit_json(changed=True, meta=name)

if __name__ == "__main__":
    main()
I don't know where I am making the mistake. Any help will be greatly appreciated. Thank you.
When developing a new module, I would recommend using the boilerplate described in the documentation. It also shows that you'll need to use AnsibleModule to define your arguments.
In your main, you should add something like the following:
def main():
    # define available arguments/parameters a user can pass to the module
    module_args = dict(
        name=dict(type='str', required=True)
    )

    # seed the result dict in the object
    # we primarily care about changed and state
    # changed is whether this module effectively modified the target
    # state will include any data that you want your module to pass back
    # for consumption, for example, in a subsequent task
    result = dict(
        changed=False,
        original_hostname='',
        hostname=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=False
    )

    # manipulate or modify the state as needed (this is going to be the
    # part where your module will do what it needs to do)
    result['original_hostname'] = module.params['name']
    result['hostname'] = 'goodbye'

    # use whatever logic you need to determine whether or not this module
    # made any modifications to your target
    result['changed'] = True

    # in the event of a successful module execution, simply call
    # AnsibleModule.exit_json(), passing the key/value results
    module.exit_json(**result)
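And keep the standard entry point at the bottom of the file, as in your original module:
if __name__ == '__main__':
    main()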
Then, you can call the module like so:
ansible node1 -m mymodule.py -a "name=myname"
ERROR! this task 'edit_hostname.py' has extra params, which is only allowed in the following modules: meta, group_by, add_host, include_tasks, import_role, raw, set_fact, command, win_shell, import_tasks, script, shell, include_vars, include_role, include, win_command
As explained by your error message, an anonymous default parameter is only supported by a limited number of modules. In your custom module, the parameter you created is called name. Moreover, you should not include the .py extension in the module name. You have to call your module as an ad-hoc command like so:
$ ansible node1 -m edit_hostname -a name=node2
I did not test your module code so you may have further errors to fix.
Meanwhile, I still strongly suggest you use the default boilerplate from the Ansible documentation, as proposed in Simon's answer.
I have created a simple HTTP-trigger-based Azure Function in Python which calls another Python script to create a sample file in Azure Data Lake Gen 1. My solution structure is given below:
requirements.txt contains the following packages:
azure-functions
azure-mgmt-resource
azure-mgmt-datalake-store
azure-datalake-store
__init__.py
import logging, os, sys
import azure.functions as func
import json

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        full_path_to_script = os.path.join(os.path.dirname(__file__) + '/Test.py')
        logging.info(f"Path: - {full_path_to_script}")
        os.system(f"python {full_path_to_script}")
        return func.HttpResponse(f"Hello {name}!")
    else:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
Test.py
import json
from azure.datalake.store import core, lib, multithread

directoryId = ''
applicationKey = ''
applicationId = ''

adlsCredentials = lib.auth(tenant_id=directoryId, client_secret=applicationKey, client_id=applicationId)
adlsClient = core.AzureDLFileSystem(adlsCredentials, store_name='')

with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb') as input_file:
    data = json.load(input_file)

with adlsClient.open('stage1/largeFiles/Result.json', 'wb') as responseFile:
    responseFile.write(data)
Test.py fails with an error saying no module named azure.datalake.store was found.
Why are the other required modules not available to Test.py, since it is in the same directory?
pip freeze output:
adal==1.2.2
azure-common==1.1.23
azure-datalake-store==0.0.48
azure-functions==1.0.4
azure-mgmt-datalake-nspkg==3.0.1
azure-mgmt-datalake-store==0.5.0
azure-mgmt-nspkg==3.0.2
azure-mgmt-resource==6.0.0
azure-nspkg==3.0.2
certifi==2019.9.11
cffi==1.13.2
chardet==3.0.4
cryptography==2.8
idna==2.8
isodate==0.6.0
msrest==0.6.10
msrestazure==0.6.2
oauthlib==3.1.0
pycparser==2.19
PyJWT==1.7.1
python-dateutil==2.8.1
requests==2.22.0
requests-oauthlib==1.3.0
six==1.13.0
urllib3==1.25.6
Problem
os.system(f"python {full_path_to_script}") from your functions project is causing the issue.
Azure Functions Runtime sets up the environment, including modifying process-level state such as sys.path, so that your function can load its dependencies. When you spawn a sub-process like that, not all of that information flows through. You will also face issues with logging: logs from Test.py would not show up properly unless explicitly handled.
Importing works locally because all the modules in your requirements.txt are installed and available to Test.py. This is not the case in Azure. After a remote build during publish, your dependencies are bundled into the published code package; they are not "installed" globally in the Azure environment per se.
Solution
You shouldn't have to run your script like that. In the example above, you could import Test.py from your __init__.py, and it would behave much like being invoked with python Test.py (at least in this case). Is there a reason you want to run python Test.py in a sub-process rather than importing it?
Here's the official guide on how you'd want to structure your app to import shared code -- https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#folder-structure
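For illustration, a minimal sketch of that import-based approach, assuming Test.py is refactored to expose its logic as a function (the name run below is made up) rather than executing at import time:
# __init__.py -- hedged sketch, not the project's actual code
import azure.functions as func

from . import Test  # relative import; the function folder acts as a package

def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get('name')
    if not name:
        return func.HttpResponse(
            "Please pass a name on the query string or in the request body",
            status_code=400
        )
    Test.run()  # hypothetical entry point wrapping the data lake read/write
    return func.HttpResponse(f"Hello {name}!")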
Side-Note
I think once you get through the import issue, you may also face problems with adlsClient.open('stage1/largeFiles/TestFile.json', 'rb'). We recommend following the developer guide above to structure your project and using __file__ to get the absolute path (reference).
For example --
import pathlib

with open(pathlib.Path(__file__).parent / 'stage1' / 'largeFiles' / 'TestFile.json'):
    ...
Now, if you really want to make os.system(f"python {full_path_to_script}") work, there are workarounds for the import issue. But I'd rather not recommend that approach unless you have a really compelling need for it. :)
I'm new to Python. This is my first Ansible module, written to delete a SimpleDB domain as part of ChaosMonkey cleanup.
When I test it in my local venv on Mac OS X, it keeps saying:
Module unable to decode valid JSON on stdin. Unable to figure out
what parameters were passed.
Here is the code:
#!/usr/bin/python
# Delete SimpleDB Domain
from ansible.module_utils.basic import *
import boto3

def delete_sdb_domain():
    fields = dict(
        sdb_domain_name=dict(required=True, type='str')
    )
    module = AnsibleModule(argument_spec=fields)
    client = boto3.client('sdb')
    response = client.delete_domain(DomainName=module.params['sdb_domain_name'])
    module.exit_json(changed=False, meta=response)

def main():
    delete_sdb_domain()

if __name__ == '__main__':
    main()
And I'm trying to pass in parameters from this file: /tmp/args.json.
and run the following command to test locally:
$ python ./delete_sdb_domain.py /tmp/args.json
Please note I'm using a venv test environment on my Mac.
If you find any syntax error in my module, please also point it out.
This is not how you should test your modules.
AnsibleModule expects to have specific JSON as stdin data.
So the closest thing you can try is:
python ./delete_sdb_domain.py < /tmp/args.json
But I bet your JSON file is in the wrong format (no ANSIBLE_MODULE_ARGS, etc.).
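For reference, AnsibleModule looks for an ANSIBLE_MODULE_ARGS envelope on stdin, so /tmp/args.json would need to look roughly like this (a sketch; the exact shape can vary between Ansible versions):
{
    "ANSIBLE_MODULE_ARGS": {
        "sdb_domain_name": "my_domain"
    }
}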
To debug your modules you can use the test-module script from Ansible's hacking pack:
./hacking/test-module -m delete_sdb_domain.py -a "sdb_domain_name=zzz"
I want to load a Jupyter Notebook Server Extension within a local directory:
server_ext/
|__ __init__.py
|__ extension.py
extension.py
from notebook.utils import url_path_join
from notebook.base.handlers import IPythonHandler

class HelloWorldHandler(IPythonHandler):
    def get(self):
        self.finish('Hello, world!')

def load_jupyter_server_extension(nbapp):
    """
    nbapp is an instance of notebook.notebookapp.NotebookApp.
    nbapp.web_app is an instance of tornado.web.Application - it can register new
    tornado.web.RequestHandlers to extend the API backend.
    """
    nbapp.log.info('My Extension Loaded')
    web_app = nbapp.web_app
    host_pattern = '.*$'
    route_pattern = url_path_join(web_app.settings['base_url'], '/hello')
    web_app.add_handlers(host_pattern, [(route_pattern, HelloWorldHandler)])
I run the following command from the directory containing server_ext:
jupyter notebook --NotebookApp.server_extensions="['server_ext.extension']"
But I get the error "No module named extension". Is there something I have to do to get the Jupyter/Python session to recognize the path to the module?
Figured it out:
it turns out that Jupyter Notebook's call to importlib.import_module sets package=None, which means that relative paths will not work.
As a workaround, the ~/.jupyter/jupyter_notebook_config.py script can be modified to append your local directory to sys.path so that the module can be found.
import sys
sys.path.append("C:\\Users\\eric\\server_ext")
c.NotebookApp.server_extensions = [
'extension'
]