Trying to add a few imports to my IPython profile so that they're always loaded when I open a kernel in the Spyder IDE. Spyder has a Qt interface (I think?), so I (a) checked that I was in the right directory for the profile using the ipython locate command in the terminal (OS X), and (b) placed the following code in my ipython_qtconsole_config.py file:
c.IPythonQtConsoleApp.exec_lines = ["import pandas as pd",
                                    "pd.set_option('io.hdf.default_format', 'table')",
                                    "pd.set_option('mode.chained_assignment', 'raise')",
                                    "from __future__ import division, print_function"]
But when I open a new window and type pd.__version__ I get NameError: name 'pd' is not defined.
Edit: I don't have any problems if I run ipython qtconsole from the Terminal.
Suggestions?
Thanks!
Whether Spyder uses a Qt interface or not shouldn't determine which of the IPython config files you need to modify. The one you chose to modify, ipython_qtconsole_config.py, is the configuration file that is loaded when you launch IPython's Qt console, such as with the command
user@system:~$ ipython qtconsole
(I needed to update pyzmq for this to work.)
If Spyder maintains a running IPython kernel and merely manages how to display it for you, then Spyder is probably just maintaining a regular IPython session, in which case you want your configuration settings to go into the file ipython_config.py, in the same directory where you found ipython_qtconsole_config.py.
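If that's the case, a minimal sketch of what ipython_config.py would contain, reusing the exec_lines from your question (c.InteractiveShellApp is the generic option, so whichever frontend runs this profile will execute these lines):
# ipython_config.py -- sketch: the question's exec_lines, attached to the
# generic InteractiveShellApp so any frontend using this profile runs them
c = get_config()

c.InteractiveShellApp.exec_lines = [
    "import pandas as pd",
    "pd.set_option('io.hdf.default_format', 'table')",
    "pd.set_option('mode.chained_assignment', 'raise')",
    "from __future__ import division, print_function",
]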
I manage this slightly differently than you do. Inside of ipython_config.py the top few lines for me look like this:
# Configuration file for ipython.
from os.path import join as pjoin
from IPython.utils.path import get_ipython_dir

c = get_config()

c.InteractiveShellApp.exec_files = [
    pjoin(get_ipython_dir(), "profile_default", "launch.py")
]
What this does is obtain the IPython configuration directory, append the profile_default subdirectory, and then append the name launch.py, which is a file I created just to hold anything I want executed/loaded upon startup.
For example, here's the first bit from my file launch.py:
"""
IPython launch script
Author: Ely M. Spears
"""
import re
import os
import abc
import sys
import mock
import time
import types
import pandas
import inspect
import cPickle
import unittest
import operator
import warnings
import datetime
import dateutil
import calendar
import copy_reg
import itertools
import contextlib
import collections
import numpy as np
import scipy as sp
import scipy.stats as st
import scipy.weave as weave
import multiprocessing as mp
from IPython.core.magic import (
    Magics,
    register_line_magic,
    register_cell_magic,
    register_line_cell_magic
)
from dateutil.relativedelta import relativedelta as drr
###########################
# Pickle/Unpickle methods #
###########################
# See explanation at:
# http://bytes.com/topic/python/answers/552476-why-cant-you-pickle-instancemethods

def _pickle_method(method):
    func_name = method.im_func.__name__
    obj = method.im_self
    cls = method.im_class
    return _unpickle_method, (func_name, obj, cls)

def _unpickle_method(func_name, obj, cls):
    # walk the MRO until a class that actually defines the method is found
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
        else:
            break
    return func.__get__(obj, cls)

copy_reg.pickle(types.MethodType, _pickle_method, _unpickle_method)
#############
# Utilities #
#############
def interface_methods(*methods):
    """
    Class decorator that can decorate an abstract base class with method names
    that must be checked in order for isinstance or issubclass to return True.
    """
    def decorator(Base):
        def __subclasshook__(Class, Subclass):
            if Class is Base:
                # flatten the attribute names of every class in the MRO
                all_ancestor_attrs = [attr
                                      for ancestor_class in Subclass.__mro__
                                      for attr in ancestor_class.__dict__.keys()]
                if all(method in all_ancestor_attrs for method in methods):
                    return True
            return NotImplemented
        Base.__subclasshook__ = classmethod(__subclasshook__)
        return Base
    return decorator
def interface(*attributes):
    """
    Class decorator checking for any kind of attributes, not just methods.

    Usage:

    @interface('foo', 'bar', 'baz')
    class Blah(object):
        pass

    Now, new classes will be treated as if they are subclasses of Blah, and
    instances will be treated as instances of Blah, provided they possess the
    attributes 'foo', 'bar', and 'baz'.
    """
    def decorator(Base):
        def checker(Other):
            return all(hasattr(Other, a) for a in attributes)
        def __subclasshook__(cls, Other):
            if checker(Other):
                return True
            return NotImplemented
        def __instancecheck__(cls, Other):
            return checker(Other)
        Base.__metaclass__.__subclasshook__ = classmethod(__subclasshook__)
        Base.__metaclass__.__instancecheck__ = classmethod(__instancecheck__)
        return Base
    return decorator
There's a lot more, probably dozens of helper functions, snippets of code I've thought are cool and just want to play with, etc. I also define some randomly generated toy data sets, like NumPy arrays and Pandas DataFrames, so that when I want to poke around with some one-off Pandas syntax or something, some toy data is always right there.
The other upside is that this factors out the custom imports, function definitions, etc. that I want loaded. If I want the same things loaded for the notebook and/or the Qt console, I can just add the same bit of code to exec the file launch.py, and then make changes only in launch.py without having to migrate them manually to each of the three configuration files.
I also uncomment a few of the different settings, especially for plain IPython and for the notebook, so the config files are meaningfully different from each other, just not in which modules are imported on startup.
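Concretely, reusing launch.py from another frontend is just the same stanza again; a sketch for ipython_notebook_config.py, assuming the default profile paths:
# ipython_notebook_config.py -- same stanza, pointing at the one shared launch.py
from os.path import join as pjoin
from IPython.utils.path import get_ipython_dir

c = get_config()

c.InteractiveShellApp.exec_files = [
    pjoin(get_ipython_dir(), "profile_default", "launch.py")
]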
I have a relatively complex ecosystem of applications and libraries that are scheduled to run in my environment.
I am trying to improve my logging, and in particular I'd like to write debug information to a log file. I'd like that log to contain the logger.debug("string") lines from all the libraries I wrote, but not from the libraries I installed from PyPI.
Example:
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary
logger = logging.getLogger(application_name)  # I don't use __name__ in all of them, but I can change this line as necessary
so in this case, when I set the logger level to DEBUG, I'd like to see debug information from the current script, from mylibrary, and from myotherlibrary, but not from bs4, numpy, etc.
Bonus: ideally I would like not to have to hardcode the library names every time, but just have the script "know" them (from a naming convention, maybe?).
If anyone has any ideas it'd be greatly appreciated!
Python doesn't really have a concept of "libraries I wrote" vs. "libraries installed from PyPI" - a library is a library, unfortunately.
However, depending on how your libraries are set up, you may be able to get a really hacky custom logger working.
By default, Python libraries installed with pip go to a central location - usually somewhere like /usr/local/lib, or %APPDATA% on Windows. In contrast, local libraries usually live in the same directory as the calling script. We can use this to our advantage!
The following code is a proof of concept - I've left a few methods to implement as an exercise ;)
# CustomLogger.py
import __main__
import logging
import os

# create a custom log class, inheriting the current logger class
class CustomLogger(logging.getLoggerClass()):
    custom_lib = False

    def __init__(self, name):
        # initialise the base logger
        super().__init__(name)
        # get the directory we are being run from
        current_dir = os.path.dirname(__main__.__file__)
        permutations = ['/', '.py', '.pyc']
        # check if we are a custom library, or one installed via pip etc.
        self.custom_lib = self.checkExists(current_dir, permutations)
        self.propagate = not self.custom_lib

    def checkExists(self, current_dir, permutations):
        # loop through each permutation and see if a file matching that spec exists
        # currently looks for .py/.pyc files and package directories
        for perm in permutations:
            file = os.path.join(current_dir, self.name + perm)
            if os.path.exists(file):
                return True
        return False

    def isEnabledFor(self, level):
        if self.custom_lib:
            return super().isEnabledFor(level)
        return False

    # the hackiest part :)
    # these are two sample overrides that only log if we're a custom
    # library (i.e. one we've written, not installed)
    # there are a few more methods that I've not implemented; a full
    # list is available at https://docs.python.org/3/library/logging.html#logging.Logger
    def debug(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().debug(msg, *args, **kwargs)

    def info(self, msg, *args, **kwargs):
        if self.custom_lib:
            return super().info(msg, *args, **kwargs)

# most important part - also override the logger class
# this means that any calls to logging.getLogger() will use our new subclass
logging.setLoggerClass(CustomLogger)
You could then use it like this:
import CustomLogger  # needs importing first so the logger class is set up before any loggers are created
import sys
import numpy
from bs4 import BeautifulSoup
import logging
import mylibrary
import myotherlibrary

logger = logging.getLogger(application_name)  # returns type CustomLogger
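For comparison: if you don't mind naming your own top-level packages once (which the question's bonus hoped to avoid), the stock logging machinery gets most of the way there without any subclassing. A minimal sketch, where 'myapp' stands in for whatever name your scripts log under:
import logging

# Root stays at WARNING, so third-party loggers (bs4, numpy, ...) inherit it;
# only the named loggers for your own packages drop to DEBUG.
logging.basicConfig(level=logging.WARNING)
for name in ('myapp', 'mylibrary', 'myotherlibrary'):
    logging.getLogger(name).setLevel(logging.DEBUG)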
I want to build a well-modularized Python project, where all alternative modules are registered and accessed via a function named xxx_builder.
Taking a data class as an example:
register.py:
import logging

logger = logging.getLogger(__name__)

def register(key, module, module_dict):
    """Register and maintain the data classes
    """
    if key in module_dict:
        logger.warning(
            'Key {} is already pre-defined, overwritten.'.format(key))
    module_dict[key] = module

data_dict = {}

def register_data(key, module):
    register(key, module, data_dict)
data.py:
from register import register_data
import ABCDEF

class MyData:
    """An alternative data class
    """
    pass

def call_my_data(data_type):
    if data_type == 'mydata':
        return MyData

register_data('mydata', call_my_data)
builder.py:
import register

def get_data(type):
    """Obtain the corresponding data class
    """
    for func in register.data_dict.values():
        data = func(type)
        if data is not None:
            return data
main.py:
from data import MyData
from builder import get_data

if __name__ == '__main__':
    data_type = 'mydata'
    data = get_data(type=data_type)
My problem
In main.py, to register the MyData class into register.data_dict before get_data is called, I need to import data.py in advance, so that register_data('mydata', call_my_data) is executed.
That's okay while the project is small and all the data-related classes are placed according to some rule (e.g. all data-related classes live under the directory data), so that I can import them in advance.
However, this registration mechanism means that all data-related classes get imported, and I have to install all of their packages even if I won't actually use them. For example, even when the indicator data_type in main.py is not 'mydata', I still need to install the ABCDEF package for the class MyData.
So is there a good way to avoid importing all the packages?
Python's packaging tools come with a solution for this: entry points. There's even a tutorial about how to use entry points for plugins (which seems to be what you're doing), in conjunction with this Setuptools tutorial.
In other words, something like this (n.b. untested): if you have a plugin package that has defined
[options.entry_points]
myapp.data_class =
    someplugindata = my_plugin.data_classes:SomePluginData
in setup.cfg (or pyproject.toml or setup.py, with their respective syntaxes), you could register all of these plugin classes (shown here together with an example of a locally registered class).
from importlib.metadata import entry_points

data_class_registry = {}

def register(key):
    def decorator(func):
        data_class_registry[key] = func
        return func
    return decorator

@register("mydata")
class MyData:
    ...

def register_from_entrypoints():
    for entrypoint in entry_points(group="myapp.data_class"):
        register(entrypoint.name)(entrypoint.load())

def get_constructor(type):
    return data_class_registry[type]

def main():
    register_from_entrypoints()
    get_constructor("mydata")(...)
    get_constructor("someplugindata")(...)
I have config data that must be loaded before the rest of the code (because that code uses it).
So, for now, the only way I see to do this is to call the function at the top, before the rest of the imports:
from Init.Loaders.InitPreLoader import InitPreLoader
# this is my config loader
InitPreLoader.load()

from World.WorldManager import WorldManager
from Init.Loaders.InitLoader import InitLoader
from Init.Registry.InitRegistry import InitRegistry
from Utils.Debug import Logger
# ...

if __name__ == '__main__':
    # ...
    InitLoader.load()
Is it possible to do this in a more elegant way and avoid violating PEP 8?
P.S. If I need to share more code, please let me know.
UPD: All my classes are declared in separate files.
This is the PreLoader:
from Typings.Abstract.AbstractLoader import AbstractLoader
from Init.Registry.InitRegistry import InitRegistry
from Config.Init.configs import main_config

class InitPreLoader(AbstractLoader):
    @staticmethod
    def load(**kwargs):
        InitRegistry.main_config = main_config
This is the Registry (where I store all my initialized data):
from Typings.Abstract.AbstractRegistry import AbstractRegistry

class InitRegistry(AbstractRegistry):
    main_config = None
    login_server = None
    world_server = None
    world_observer = None
    identifier_region_map = None
    region_octree_map = None
The parent of all my classes (except AbstractRegistry) is the AbstractBase class (it pulls in a mixin):
from abc import ABC
from Config.Mixins.ConfigurableMixin import ConfigurableMixin

class AbstractBase(ConfigurableMixin, ABC):
    pass
This mixin works with main_config from InitRegistry.
Also, after the PreLoader's load has been called, I load the rest of the data with my InitLoader.load() (see the first code snippet):
from Typings.Abstract.AbstractLoader import AbstractLoader
from Init.Registry.InitRegistry import InitRegistry
from Server.Init.servers import login_server, world_server
from World.Observer.Init.observers import world_observer
from World.Region.Init.regions import identifier_region_map, region_octree_map

class InitLoader(AbstractLoader):
    @staticmethod
    def load(**kwargs):
        InitRegistry.login_server = login_server
        InitRegistry.world_server = world_server
        InitRegistry.world_observer = world_observer
        InitRegistry.identifier_region_map = identifier_region_map
        InitRegistry.region_octree_map = region_octree_map
Well, for now I have found a solution: I moved from Init.Loaders.InitPreLoader import InitPreLoader into a separate file and called InitPreLoader.load() there. But I don't like this solution, because my PyCharm IDE highlights it as an unused import:
import Init.Init.preloader
from World.WorldManager import WorldManager
# ...
Maybe it is possible to improve this solution? Or maybe another (more elegant) solution exists?
Is there a way to enable a renderer other than by calling alt.renderers.enable('mimebundle') in code, so that if the user imports Altair she doesn't have to perform any additional actions?
For example, in plotly you can set an environment variable PLOTLY_RENDERER=plotly_mimetype. Is there something similar in altair?
No, Altair does not currently have any mechanism to specify a renderer aside from calling alt.renderers.enable.
But if you are using Jupyter, you can provide an IPython startup script that does this; for example, create a file at the path ~/.ipython/profile_default/startup/start.py with the following contents:
import altair
altair.renderers.enable('notebook')
and this will be executed at the start of any Jupyter/IPython session.
If you don't wish to import Altair in every session, you could instead define in this file a Python import hook that will execute custom code the first time Altair is imported. For example, it might look something like this:
import imp
import os
import sys

class _AltairImportHook(object):
    def find_module(self, fullname, path=None):
        if fullname != 'altair':
            return None
        self.module_info = imp.find_module(fullname, path)
        return self

    def load_module(self, fullname):
        """Loads Altair normally and runs pre-initialization code."""
        previously_loaded = fullname in sys.modules
        altair = imp.load_module(fullname, *self.module_info)
        if not previously_loaded:
            try:
                altair.renderers.enable('notebook')
            except Exception:
                pass
        return altair

sys.meta_path = [_AltairImportHook()] + sys.meta_path
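Note that the imp module is deprecated (and removed in Python 3.12), so on newer Pythons the same trick needs the importlib machinery instead. A rough, untested sketch of an equivalent hook:
import importlib.util
import sys

class _AltairImportHook:
    """Lets the normal machinery load altair, then runs post-import code once."""
    def find_spec(self, fullname, path=None, target=None):
        if fullname != 'altair':
            return None
        # Step aside while resolving the real spec, to avoid recursing into ourselves.
        sys.meta_path.remove(self)
        try:
            spec = importlib.util.find_spec(fullname)
        finally:
            sys.meta_path.insert(0, self)
        if spec is None or spec.loader is None:
            return spec
        real_exec_module = spec.loader.exec_module

        def exec_module(module):
            real_exec_module(module)
            try:
                module.renderers.enable('notebook')
            except Exception:
                pass

        spec.loader.exec_module = exec_module  # shadows the method on this loader instance
        return spec

sys.meta_path.insert(0, _AltairImportHook())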
Maybe it's lack of sleep, but I feel silly that I can't get this. I have a plugin, I see it get loaded, but I can't instantiate it in my main file:
from transformers.FOMIBaseClass import find_plugins, register
find_plugins()
Here's my FOMIBaseClass:
from PluginBase import MountPoint
import sys
import os

class FOMIBaseClass(object):
    __metaclass__ = MountPoint

    def __init__(self):
        pass

    def init_plugins(self):
        pass

def find_plugins():
    plugin_dir = os.path.dirname(os.path.realpath(__file__))
    plugin_files = [x[:-3] for x in os.listdir(plugin_dir) if x.endswith("Transformer.py")]
    sys.path.insert(0, plugin_dir)
    for plugin in plugin_files:
        mod = __import__(plugin)
Here's my MountPoint:
class MountPoint(type):
    def __init__(cls, name, bases, attrs):
        if not hasattr(cls, 'plugins'):
            cls.plugins = []
        else:
            cls.plugins.append(cls)
I see it being loaded:
# /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc matches /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.py
import SctyDistTransformer # precompiled from /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc
But, for the life of me, I can't instantiate the 'SctyDistTransformer' class from the main file. I know I'm missing something trivial. Basically, I want to employ a class-loading plugin mechanism.
To dynamically load Python modules from arbitrary folders, use the imp module:
http://docs.python.org/library/imp.html
Specifically the code should look like:
import imp

mod = imp.load_source("MyModule", "MyModule.py")
clz = getattr(mod, "MyClassName")
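On modern Pythons (imp is deprecated since 3.4 and removed in 3.12), the importlib equivalent is roughly this, keeping the placeholder names from the snippet above:
import importlib.util

spec = importlib.util.spec_from_file_location("MyModule", "MyModule.py")
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
clz = getattr(mod, "MyClassName")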
Also, if you are building a serious plug-in architecture, I recommend using Python eggs and entry points:
http://wiki.pylonshq.com/display/pylonscookbook/Using+Entry+Points+to+Write+Plugins
https://github.com/miohtama/vvv/blob/master/vvv/main.py#L104