This is an extension of this question
I want my SAS magic to be registered when I install my SAS kernel through pip.
It registers if I import the package (from sas_kernel.magics import sas_magic),
but I want it to be available without needing the import.
I'm using Jupyter 4.0.6.
Here is a snippet of the code:
from __future__ import print_function
import IPython.core.magic as ipym
from saspy.SASLogLexer import *
import re
import os

@ipym.magics_class
class SASMagic(ipym.Magics):
    @ipym.cell_magic
    def SAS(self, line, cell):
        '''
        %%SAS - send the code in the cell to a SAS Server
        '''
        executable = os.environ.get('SAS_EXECUTABLE', 'sas')
        if executable == 'sas':
            executable = '/usr/local/SASHome/SASFoundation/9.4/sas'
        e2 = executable.split('/')
        _path = '/'.join(e2[0:e2.index('SASHome') + 1])
        _version = e2[e2.index('SASFoundation') + 1]
        import saspy
        self.mva = saspy.SAS_session()
        self.mva._startsas(path=_path, version=_version)
        res = self.mva.submit(cell, 'html')
        output = self._clean_output(res['LST'])
        log = self._clean_log(res['LOG'])
        dis = self._which_display(log, output)
        return dis

from IPython import get_ipython
get_ipython().register_magics(SASMagic)
Since your Kernel is derived from MetaKernel, you can register the magics in your Kernel.__init__:
from .sasmagic import SASMagic

class SASKernel(MetaKernel):
    def __init__(self, **kwargs):
        super(SASKernel, self).__init__(**kwargs)
        self.register_magics(SASMagic)
(I'm assuming a little bit, since I can't see your code, but it would look something like this, assuming your SASMagic definition is in a sasmagic.py next to your kernel class definition.)
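For reference, the package layout this assumes would be something like the following (file names are illustrative, not taken from the actual sas_kernel source):

sas_kernel/
    __init__.py
    kernel.py      # defines SASKernel
    sasmagic.py    # defines SASMagic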
Related
I have a relatively complex ecosystem of applications and libraries that are scheduled to run in my environment.
I am trying to improve my logging. In particular, I'd like to write debug information to a log file, and I'd like that log to contain the logger.debug("string") lines from all the imported libraries I wrote, but not from the libraries I installed from PyPI.
example:
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary
logger = logging.getLogger(application_name)  # I don't use __name__ in all of them, but I can change this line as necessary
So in this case, when I set the logger level to debug, I'd like to see debug information from the current script, from mylibrary, and from myotherlibrary, but not from bs4, numpy, etc.
Bonus: ideally I would like not to have to hardcode the library names each time, but just have the script "know" them (from a naming convention, maybe?).
If anyone has any ideas it'd be greatly appreciated!
Python doesn't really have a concept of "libraries I wrote" vs. "libraries installed from PyPI" - a library is a library, unfortunately.
However, depending on how your libraries are set up, you may be able to get a really hacky custom logger?
By default, Python libraries installed with pip go to a central location - usually something like /usr/local/lib, or %APPDATA% on Windows. In contrast, local libraries usually live in the same directory as the calling script. We can use this to our advantage!
The following code demonstrates a kind of proof of concept - I've left a few methods to implement as an exercise ;)
# CustomLogger.py
import __main__
import logging
import os

# create a custom log class, inheriting the current logger class
class CustomLogger(logging.getLoggerClass()):
    custom_lib = False

    def __init__(self, name):
        # initialise the base logger
        super().__init__(name)
        # get the directory we are being run from
        current_dir = os.path.dirname(__main__.__file__)
        permutations = ['/', '.py', '.pyc']
        # check if we are a custom library, or one installed via pip etc.
        self.custom_lib = self.checkExists(current_dir, permutations)
        self.propagate = not self.custom_lib

    def checkExists(self, current_dir, permutations):
        # loop through each permutation and see if a file matching that spec exists
        # currently looks for .py/.pyc files and directories
        for perm in permutations:
            file = os.path.join(current_dir, self.name + perm)
            if os.path.exists(file):
                return True
        return False

    def isEnabledFor(self, level):
        if self.custom_lib:
            return super().isEnabledFor(level)
        return False

    # the hackiest part :)
    # these are two sample overrides that only log if we're a custom
    # library (i.e. one we've written, not installed)
    # there are a few more methods that I've not implemented; a full
    # list is available at https://docs.python.org/3/library/logging.html#logging.Logger
    def debug(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().debug(msg, *args, **kwargs)

    def info(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().info(msg, *args, **kwargs)

# most important part - also override the logger class
# this means that any calls to logging.getLogger() will use our new subclass
logging.setLoggerClass(CustomLogger)
You could then use it like this:
import CustomLogger  # needs importing first so it ensures the logger is set up
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary

logger = logging.getLogger(application_name)  # returns type CustomLogger
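For completeness, the level methods left as an exercise would follow the same guard pattern inside CustomLogger - a sketch mirroring debug/info above:

    def warning(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().warning(msg, *args, **kwargs)

    def error(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().error(msg, *args, **kwargs)

    def critical(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().critical(msg, *args, **kwargs)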
I am looking for a way to add soft requirements to my Python module.
My module uses some external pip packages that the user may or may not install, depending on their use case. I would like these to be optional rather than required, even though a big chunk of the code needs them. I simply want to raise an error if a class is used without the relevant module installed.
example:
# tkc_ext.py
try:
    from tkcalendar import DateEntry
    tkc_imported = True
except ImportError:
    tkc_imported = False

class tkcNotImportedError(Exception):
    def __init__(self):
        print("tkcalendar module is not installed.\n please install it to use these widgets")

class cDateEntry(DateEntry):
    def __init__(self, master, **kwargs):
        if not tkc_imported:
            raise tkcNotImportedError
        ## insert code here
I'm not sure how I can disable the cDateEntry class when tkcalendar has not been installed: the class line itself errors because DateEntry was never imported, and I can't raise an error before that, since the file would then fail simply by being imported.
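One common workaround (a sketch, not from the original post) is to bind a placeholder base class when the import fails, so the class statement can still execute:

# tkc_ext.py - sketch of the placeholder-base pattern
try:
    from tkcalendar import DateEntry
    tkc_imported = True
except ImportError:
    tkc_imported = False
    DateEntry = object  # placeholder base so the class definition succeeds

class cDateEntry(DateEntry):
    def __init__(self, master, **kwargs):
        if not tkc_imported:
            raise tkcNotImportedError()  # the exception class defined above
        # real widget initialisation would go here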
I have a main.py file with a block of code like this:
import urtc
import machine
rtc = urtc.DS3231(machine.I2C(scl=machine.Pin(0), sda=machine.Pin(2)))
from func import * #line 4
Now, the func.py file that is imported on line 4 has code something like this:
def current_time():
    import urtc
    import machine
    rtc = urtc.DS3231(machine.I2C(scl=machine.Pin(0), sda=machine.Pin(2)))
    return urtc.tuple2seconds(rtc.datetime())
In main.py, I'm already importing urtc and machine and defining rtc. Is it possible to eliminate these three lines from the function current_time():
import urtc
import machine
rtc = urtc.DS3231(machine.I2C(scl=machine.Pin(0), sda=machine.Pin(2)))
It seems redundant, as I already have them at global scope in main.py. How can I use the globals from main.py instead of importing them again inside current_time()?
You should pass the urtc.DS3231 instance to the current_time function like so:
def current_time(rtc):
    return urtc.tuple2seconds(rtc.datetime())
But you still need to import urtc in func.py so that urtc.tuple2seconds is available.
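Put together, func.py would then look something like this (a sketch):

# func.py - the function now takes the rtc instance as an argument
import urtc  # still needed for urtc.tuple2seconds

def current_time(rtc):
    return urtc.tuple2seconds(rtc.datetime())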
You should use arguments in your function; it is in fact bad design to do it the way you did.
# main.py
import urtc
import machine

rtc = urtc.DS3231(machine.I2C(scl=machine.Pin(0), sda=machine.Pin(2)))
from func import *

current_time(rtc)

# func.py
def current_time(rtc):
    return urtc.tuple2seconds(rtc.datetime())
I would suggest that you load the dependencies in func.py instead (if you are not using them anywhere else in main.py, that is the better practice).
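That alternative would look something like this (a sketch, assuming rtc is only used from func.py):

# func.py - imports and the rtc instance live here
import urtc
import machine

rtc = urtc.DS3231(machine.I2C(scl=machine.Pin(0), sda=machine.Pin(2)))

def current_time():
    return urtc.tuple2seconds(rtc.datetime())

# main.py would then just need:
# from func import current_time
# current_time()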
I'm trying to add a few imports to my IPython profile so that they're always loaded when I open a kernel in the Spyder IDE. Spyder has a Qt interface (I think??), so I (a) checked that I was in the right directory for the profile using the ipython locate command in the terminal (OS X), and (b) placed the following code in my ipython_qtconsole_config.py file:
c.IPythonQtConsoleApp.exec_lines = ["import pandas as pd",
                                    "pd.set_option('io.hdf.default_format', 'table')",
                                    "pd.set_option('mode.chained_assignment','raise')",
                                    "from __future__ import division, print_function"]
But when I open a new window and type pd.__version__, I get NameError: name 'pd' is not defined.
Edit: I don't have any problems if I run ipython qtconsole from the Terminal.
Suggestions?
Thanks!
Whether Spyder uses a Qt interface or not shouldn't determine which of the IPython config files you modify. The one you chose, ipython_qtconsole_config.py, is the configuration file loaded when you launch IPython's Qt console directly, such as with the command-line command
user@system:~$ ipython qtconsole
(I needed to update pyzmq for this to work.)
If Spyder maintains a running IPython kernel and merely manages how to display it for you, then Spyder is probably just maintaining a regular IPython session, in which case you want your configuration settings to go into ipython_config.py, in the same directory where you found ipython_qtconsole_config.py.
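In that case, the snippet from the question would move into ipython_config.py essentially unchanged, just against the generic shell app - something like this sketch:

# ipython_config.py
c = get_config()
c.InteractiveShellApp.exec_lines = ["import pandas as pd",
                                    "pd.set_option('io.hdf.default_format', 'table')",
                                    "pd.set_option('mode.chained_assignment','raise')",
                                    "from __future__ import division, print_function"]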
I manage this slightly differently than you do. Inside of ipython_config.py the top few lines for me look like this:
# Configuration file for ipython.
from os.path import join as pjoin
from IPython.utils.path import get_ipython_dir

c = get_config()
c.InteractiveShellApp.exec_files = [
    pjoin(get_ipython_dir(), "profile_default", "launch.py")
]
What this does is obtain the IPython configuration directory, add on the profile_default subdirectory, and then add the name launch.py, which is a file I created just to hold anything I want executed/loaded on startup.
For example, here's the first bit from my file launch.py:
"""
IPython launch script
Author: Ely M. Spears
"""
import re
import os
import abc
import sys
import mock
import time
import types
import pandas
import inspect
import cPickle
import unittest
import operator
import warnings
import datetime
import dateutil
import calendar
import copy_reg
import itertools
import contextlib
import collections
import numpy as np
import scipy as sp
import scipy.stats as st
import scipy.weave as weave
import multiprocessing as mp
from IPython.core.magic import (
    Magics,
    register_line_magic,
    register_cell_magic,
    register_line_cell_magic
)
from dateutil.relativedelta import relativedelta as drr
###########################
# Pickle/Unpickle methods #
###########################
# See explanation at:
# http://bytes.com/topic/python/answers/552476-why-cant-you-pickle-instancemethods
def _pickle_method(method):
    func_name = method.im_func.__name__
    obj = method.im_self
    cls = method.im_class
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
        else:
            break
    return func.__get__(obj, cls)

copy_reg.pickle(types.MethodType, _pickle_method, _unpickle_method)
#############
# Utilities #
#############
def interface_methods(*methods):
    """
    Class decorator that can decorate an abstract base class with method names
    that must be checked in order for isinstance or issubclass to return True.
    """
    def decorator(Base):
        def __subclasshook__(Class, Subclass):
            if Class is Base:
                # flatten the attribute names from every class in the MRO
                all_ancestor_attrs = [attr
                                      for ancestor_class in Subclass.__mro__
                                      for attr in ancestor_class.__dict__.keys()]
                if all(method in all_ancestor_attrs for method in methods):
                    return True
            return NotImplemented
        Base.__subclasshook__ = classmethod(__subclasshook__)
        return Base
    return decorator
def interface(*attributes):
    """
    Class decorator checking for any kind of attributes, not just methods.

    Usage:

    @interface('foo', 'bar', 'baz')
    class Blah:
        pass

    Now, new classes will be treated as if they are subclasses of Blah, and
    instances will be treated as instances of Blah, provided they possess the
    attributes 'foo', 'bar', and 'baz'.
    """
    def decorator(Base):
        def checker(Other):
            return all(hasattr(Other, a) for a in attributes)
        def __subclasshook__(cls, Other):
            if checker(Other):
                return True
            return NotImplemented
        def __instancecheck__(cls, Other):
            return checker(Other)
        Base.__metaclass__.__subclasshook__ = classmethod(__subclasshook__)
        Base.__metaclass__.__instancecheck__ = classmethod(__instancecheck__)
        return Base
    return decorator
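A quick usage sketch for the interface decorator (hypothetical class names; Python 2 style, to match the __metaclass__/copy_reg code above):

import abc

@interface('save', 'load')
class Serializable(object):
    __metaclass__ = abc.ABCMeta

class Document(object):
    def save(self):
        pass
    def load(self):
        pass

# Document never subclasses Serializable, but it has 'save' and 'load'
assert isinstance(Document(), Serializable)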
There's a lot more - probably dozens of helper functions and snippets of code I've thought were cool and just wanted to play with. I also define some randomly generated toy data sets, like NumPy arrays and Pandas DataFrames, so that when I want to poke around with some one-off Pandas syntax or something, toy data is always right there.
The other upside is that this factors out the custom imports, function definitions, etc. that I want loaded: if I want the same things loaded for the notebook and/or the Qt console, I can just add the same bit of code to exec the file launch.py, and make changes only in launch.py without manually migrating them to each of the three configuration files.
I also uncomment a few of the different settings, especially for plain IPython and for the notebook, so the config files are meaningfully different from each other, just not based on what modules I want imported on start up.
I'm trying to write my first module in Ansible, which is essentially a wrapper around another module. Here is my module:
#!/usr/bin/python
import ansible.runner
import sys

def main():
    module.exit_json(changed=False)

from ansible.module_utils.basic import *
main()
and here is the error it gives me (stripped from 'msg'):
ImportError: No module named ansible.runner
I am on Ubuntu and installed Ansible with aptitude; the version is 1.9.1.
Any ideas?
Modules have to be essentially standalone. The boilerplate gets injected at runtime (the text of the boilerplate replaces the import at the bottom), and the combined text of the module plus boilerplate is squirted to the remote machine and run there. As such, you can't import things from Ansible core like the runner (unless you install Ansible on the remote machine - don't be that guy). module is one of the objects you have to create from the stuff defined in the boilerplate. Here's a sample module skeleton I wrote:
#! /usr/bin/python
import json

def main():
    module = AnsibleModule(
        argument_spec = dict(
            state = dict(default='present', choices=['present', 'absent'])
        ),
        supports_check_mode = True
    )
    p = module.params
    changed = False
    state = p['state']

    if not module.check_mode:
        # do stuff
        pass

    # module.fail_json(msg='it broke')
    module.exit_json(changed=changed)

from ansible.module_utils.basic import *
main()
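To try a skeleton like this, you could save it under a hypothetical name such as library/my_module.py next to your playbook and invoke it ad hoc with something like the following (-M/--module-path points Ansible at a local module directory):

ansible localhost -M ./library -m my_module -a "state=present"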
I just checked a module I wrote a while back and I don't have such an import line. The only import I have is from ansible.module_utils.basic import *. The module object I create myself in main:
module = AnsibleModule(
    argument_spec=dict(
        paramA=dict(required=True),
        paramB=dict(required=False),
        paramC=dict(required=False),
    ),
    add_file_common_args=True,
    supports_check_mode=True
)