Calculating EuropeanOptionImpliedVolatility in quantlib-python

I have R code that uses RQuantlib library. In order to run it from python I am using RPy2. I know python has its own bindings for quantlib (quantlib-python). I'd like to switch from R to python completely.
Please let me know how I can run the following using quantlib-python
import rpy2.robjects as robjects

robjects.r('library(RQuantLib)')
x = robjects.r('x <- EuropeanOptionImpliedVolatility(type="call", value=11.10, underlying=100, strike=100, dividendYield=0.01, riskFreeRate=0.03, maturity=0.5, volatility=0.4)')
print(x)
Sample run:
$ python vol.py
Loading required package: Rcpp
Implied Volatility for EuropeanOptionImpliedVolatility is 0.381

You'll need a bit of setup. For convenience, and unless you get name clashes, you had better import everything:
from QuantLib import *
then, create the option, which needs an exercise and a payoff:
exercise = EuropeanExercise(Date(3, August, 2011))
payoff = PlainVanillaPayoff(Option.Call, 100.0)
option = EuropeanOption(payoff, exercise)
(note that you'll need an exercise date, not a time to maturity.)
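For instance, here is a minimal sketch of deriving the exercise date from a time to maturity (the 0.5 years of the R call, expressed as six months; anchoring on today's date is an assumption, not something from the original question):
today = Date.todaysDate()
Settings.instance().evaluationDate = today
exercise = EuropeanExercise(today + Period(6, Months))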
Now, whether you want to price it or get its implied volatility, you'll have to set up a Black-Scholes process. There's a bit of machinery involved, since you can't just pass a value, say, of the risk-free rate: you'll need a full curve, so you'll create a flat one and wrap it in a handle. Ditto for dividend yield and volatility; the underlying value goes in a quote. (I'm not explaining what all the objects are; comment if you need it.)
S = QuoteHandle(SimpleQuote(100.0))
r = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.03, Actual360()))
q = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.01, Actual360()))
sigma = BlackVolTermStructureHandle(BlackConstantVol(0, TARGET(), 0.20, Actual360()))
process = BlackScholesMertonProcess(S, q, r, sigma)
(the volatility won't actually be used for implied-vol calculation, but you need one anyway.)
Now, for implied volatility you'll call:
option.impliedVolatility(11.10, process)
and for pricing:
engine = AnalyticEuropeanEngine(process)
option.setPricingEngine(engine)
option.NPV()
You might use other features (wrap rates in a quote so you can change them later, etc.) but this should get you started.
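For example, here is a minimal sketch of that quote-wrapping idea, reusing the names defined above (the rate lives in a SimpleQuote, so changing the quote reprices the option without rebuilding anything):
rQuote = SimpleQuote(0.03)
r = YieldTermStructureHandle(FlatForward(0, TARGET(), QuoteHandle(rQuote), Actual360()))
process = BlackScholesMertonProcess(S, q, r, sigma)
option.setPricingEngine(AnalyticEuropeanEngine(process))
print(option.NPV())
rQuote.setValue(0.04)  # the flat curve picks up the new rate...
print(option.NPV())    # ...and the option reprices automatically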

Related

Audiomath TypeError - MakeRise, MakeFade, MakeHannWindow

I'm using audiomath under Python 3.7 on Windows 10.
I have been using audiomath to standardize my audio files for EEG analyses. It has been very useful for every parameter except this one: I keep getting stuck when trying to make it create fade-ins, fade-outs, or Hann windows.
I've run the same code on other machines with other versions of Python and Numpy and I still get the same error.
from audiomath import Sound, MakeRise
import numpy
sound01 = Sound('mySample.wav')
soundFadedIn = sound01.MakeHannWindow(5)
soundFadedIn.Play()
(error log screenshot omitted)
As pointed out by @WarrenWeckesser, this was a bug in audiomath, which has been fixed in audiomath version 1.16.0+
Note that MakeHannWindow only returns the Hann weighting itself (with duration and sampling-frequency matched to sound01). It does not return the sound multiplied by the weighting as you seem to have assumed. What you seem to be trying to do may be better accomplished using the .Fade() method (which was also affected by the same bug).
With a little modification, the way you did it is one way to do it. This always gives you a symmetric fade-in and fade-out, optionally with a plateau of a specified duration (in seconds) in the middle:
from audiomath import Sound
sound01 = Sound('mySample.wav')
soundFadedInAndOut = sound01 * sound01.MakeHannWindow(5) # note the multiplication
Or here's another, where you specify the duration of the rising and falling sections explicitly and separately, instead (it doesn't have to be symmetric, and either of the two durations can be 0):
from audiomath import Sound
sound01 = Sound('mySample.wav')
soundFadedInAndOut = sound01.Copy().Fade(risetime=0.5, falltime=0.5, hann=True)
Finally, if for some reason you're unable or unwilling to upgrade audiomath to 1.16, a workaround for the bug you're reporting might be to use the Shoulder() function from audiomath.Signal to generate your windowing function:
import audiomath as am, numpy as np
x = am.Sound('mySample.wav')
endFadeIn, startFadeOut = 0.5, x.duration-0.5
t = np.linspace(0, x.duration, x.nSamples) # in seconds
window = am.Signal.Shoulder(t, [0, endFadeIn, startFadeOut, x.duration]) # it's a numpy array, not a Sound
faded = x * window # but you can still multiply a Sound by it
faded.Play()

Translate from get_ipython().magic(u'R ...') to simple rpy2 commands

I have a Python script (converted from an IPython Notebook) with
def train_mod_inR():
    get_ipython().magic(u'R -i myinput')
    get_ipython().magic(u'R -o res res <- Rfunction(myinput)')
    return res
I am trying to just run the script using Python without any of the magic, so I am using rpy2. Does anyone know any easy way to translate the function above so it just works with standard rpy2. E.g., is it:
def train_mod_inR():
    rpy2.robjects('-i myinput')
    rpy2.robjects('-o res res <- Rfunction(myinput)')
    return res
In fact, does anyone have a good workaround for this in general? I have a feeling this will require a bit more than what I've shown - is there any command in rpy2 that will directly allow us to evaluate R code the way the magic command does?
You were pretty close.
An exact translation of what is happening in the magic would be as follows (NOTE: the object "converter" is defined in the next code block, to break down what is happening; just assume it is an rpy2 converter for now):
import rpy2.robjects
from rpy2.robjects.conversion import localconverter

# Take the Python object known as "myinput", pass it to R
# (going through the rpy2 conversion layer), and make it known to R
# as "myinput".
with localconverter(converter) as cv:
    rpy2.robjects.globalenv['myinput'] = myinput

# Run the R function known (to R) as "Rfunction", using the object known
# to R as "myinput" as the argument. Make the resulting object known to R
# as "res".
rpy2.robjects.r('res <- Rfunction(myinput)')

# Take the R object "res", pass it to Python (going through the rpy2
# conversion layer), and make it known to Python as "res".
with localconverter(converter) as cv:
    res = rpy2.robjects.globalenv['res']
The conversion can be customized in the R magic (https://rpy2.github.io/doc/v3.1.x/html/interactive.html#module-rpy2.ipython.rmagic), but assuming that you are using the current default it would be (testing whether numpy or pandas are installed is left as an exercise for the reader):
from rpy2.robjects.conversion import Converter

converter = Converter('ipython conversion')

if has_numpy:
    from rpy2.robjects import numpy2ri
    converter += numpy2ri.converter

if has_pandas:
    from rpy2.robjects import pandas2ri
    converter += pandas2ri.converter
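For the "exercise for the reader" above, one possibility is a sketch using importlib.util.find_spec:
from importlib.util import find_spec

# True if the package is importable, without actually importing it
has_numpy = find_spec('numpy') is not None
has_pandas = find_spec('pandas') is not None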
Note that conversion can be an expensive step (for example, there is no C-level mapping between R and Python for arrays of strings, so they have to be copied in a loop), and depending on how that R result is used elsewhere in the code you may want to go for a lighter conversion (or no conversion at all).
Additional comment
Now, this is if one wants to exactly replicate what is happening in an "R magic" cell in Jupyter. The core part is the execution of the cell, that is, the evaluation of the string:
rpy2.robjects.r('res <- Rfunction(myinput)')
The evaluation of that string can be moved to Python, progressively or all at once. In this case only the latter is possible because there is only one function call. The function known to R as "Rfunction" can be mapped to a Python object that is callable:
rfunction = rpy2.robjects.r('Rfunction')
If you are only interested in having res in Python, the creations of symbols myinput and res in R's globalenv might no longer be necessary and the whole can be shortened to:
with localconverter(converter) as cv:
    res = rfunction(myinput)
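Putting the pieces together, here is a minimal sketch of the original function under these assumptions (converter defined as above, and an R function named Rfunction already defined in R's global environment):
import rpy2.robjects
from rpy2.robjects.conversion import localconverter

def train_mod_inR(myinput):
    # Map the R function to a callable Python object
    rfunction = rpy2.robjects.r('Rfunction')
    # Convert the input on the way in and the result on the way out
    with localconverter(converter) as cv:
        return rfunction(myinput)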

Improve Runtime for calling R function in python function using rpy2

I primarily program in python (using jupyter notebooks) but on occasion need to use an R function. I currently do this by using rpy2 and R magic, which works fine. Now I would like to write a function which summarizes part of my analysis procedure into one wrapper function (so I don't always need to run all of the code cells but can simply execute the function once). As part of this procedure I need to call an R function, so I adapted my code to import the R function to python using the rpy2.robjects interface with importr.

This works, but it is extremely slow (more than triple the run time of an already lengthy procedure), which makes this approach infeasible for my analysis. I am assuming this has to do with me accessing R through the high-level interface of rpy2 instead of the low-level interface. I am unsure of how to use the low-level interface within a function call, though, and would need some help adapting my code.
I've tried looking into the rpy2 documentation but am struggling to understand it.
This is my code for executing the R function call from within python using R magic.
Activating rpy2 R magic
%load_ext rpy2.ipython
Load my required libraries
%%R
library(scran)
Actually call the R function
%%R -i data_mat -i input_groups -o size_factors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min.mean=0.1)
This is my alternative code to import the R function using rpy2 importr.
from rpy2.robjects.packages import importr
scran = importr('scran')
computeSumFactors = scran.computeSumFactors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min_mean=0.1)
For some reason this second approach is orders of magnitude slower.
Any help would be much appreciated.
The only difference between the two that I can see having an influence on the observed execution speed is conversion.
When running in an "R magic" code cell (prefixed with %%R), in your example the result of calling computeSumFactors() is an R object bound to the symbol size_factors in R. In the other case, the result of calling computeSumFactors() will go through the conversion system (what exactly happens there depends on which converters are active) before the result is bound to the Python symbol size_factors.
Conversion can be costly: you should consider trying to deactivate numpy / pandas conversion (the localconverter context manager can be a convenient way to temporarily use minimal conversion for a code block).
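For example, here is a minimal sketch of that idea applied to the importr-based code above, assuming data_mat and input_groups are already rpy2 objects (or are converted once beforehand):
from rpy2.robjects import default_converter
from rpy2.robjects.conversion import localconverter

# Only the minimal converter is active inside this block, so the result
# stays an rpy2 object instead of being copied into a numpy/pandas structure.
with localconverter(default_converter):
    size_factors = computeSumFactors(data_mat, clusters=input_groups, min_mean=0.1)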

Python unit test advice

Can I get some advice on writing a unit test for the following piece of code?
%python
import sys
import json

sys.argv = []
sys.argv.append('{"product1":{"brand":"x","type":"y"}}')
sys.argv.append('{"product1":{"brand":"z","type":"a"}}')
products = sys.argv

yy = {}
my_products = []
for n, i in enumerate(products[:]):
    xx = json.loads(i)
    for j in xx.keys():
        yy["brand"] = xx[j]['brand']
        yy["type"] = xx[j]["type"]
    my_products.append(yy)
print(my_products)
As it stands there aren't any units to test!!!
A test might consist of:
- packaging your program in a script
- invoking your program from a python unit test as a subprocess
- piping the output of your command process to a buffer
- asserting the buffer is what you expect it to be
While the above would technically allow you to have an automated test on your code, it comes with a lot of burden:
- multiprocessing
- weak assertions, by not having types
- coarse interaction (you have to invoke a script, and can't just assert on the brand/type logic)
One way to address those issues could be to package your code into smaller units, i.e. create a method to encapsulate:
for j in xx.keys():
    yy["brand"] = xx[j]['brand']
    yy["type"] = xx[j]["type"]
my_products.append(yy)
Import it, exercise it, and assert on its output. Then there might be something to map the loading and the application of the xx.keys() loop over an array (which you could also encapsulate as a function).
And then there could be the highest level, taking in args and composing the product mapper, loader, and transformer; a sketch follows below. And since your code will be thoroughly unit tested at this point, you may get away with not having a test for your top-level script.
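For instance, here is a minimal sketch of that refactoring with a test (function names are illustrative, not from the original post; note it also builds a fresh dict per record, whereas the original reused yy and so appended the same dict every time):
import json
import unittest

def extract_product(record):
    """Map one parsed JSON record to a {brand, type} dict."""
    product = {}
    for value in record.values():
        product["brand"] = value["brand"]
        product["type"] = value["type"]
    return product

def parse_products(args):
    """Parse each JSON argument string into a product dict."""
    return [extract_product(json.loads(arg)) for arg in args]

class TestParseProducts(unittest.TestCase):
    def test_two_products(self):
        args = ['{"product1":{"brand":"x","type":"y"}}',
                '{"product1":{"brand":"z","type":"a"}}']
        self.assertEqual(parse_products(args),
                         [{"brand": "x", "type": "y"},
                          {"brand": "z", "type": "a"}])

if __name__ == '__main__':
    unittest.main()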

Cache a constant value

I am developing an application for color blind people to enable them to smoothly surf the Internet. I have a set of colors, let's say A, which consists of all the colors seen by a color blind person. Set A is calculated using a big calculation involving millions of colors. Set A is independent of the inputs taken in my application, i.e. set A is like a 'constant' to me (just like 'pi' in mathematics). Now I want to store set A so that whenever I run my application, it is available without any added computational cost, i.e. I don't have to calculate A every time I run my application.
My Try:
I think this can be done by building a class having one constant, but can it be done without creating any special class for just a constant?
I am using Python!
No need for a class. You want to store the calculated values on disk and load them back again on startup: for that you will want to look into the shelve or pickle libraries.
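For example, a minimal shelve sketch (computed_set_A is a stand-in name for your expensive result, not from the original question):
import shelve

# One-time: store the expensive result on disk
with shelve.open('color_cache') as db:
    db['A'] = computed_set_A  # stand-in for your computed set

# Every startup: load it back without recomputing
with shelve.open('color_cache') as db:
    A = db['A']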
Yes, you can certainly do this with Python
If your constant was just a number -- say, you had just discovered tau -- then you would just declare it in a module, and import that module in all of your other source files:
constants.py:
# Define my new super-useful number
TAU = 6.28318530718
everywhere else:
from constants import TAU # Look, no calculations!
Expanding a bit, if you had a more complicated structure, like a dictionary, that took you a long time to compute, then you could just declare that in your module instead:
constants.py:
# Verified results of the national survey
PEPSI_CHALLENGE = {
    'Pepsi': 0.57,
    'Coke': 0.43,
}
And you can do this for more and more complicated data. The problem, eventually, is that writing your constants module gets harder and harder the more complex your data is, and it can be especially hard to update if you occasionally recompute the value you want to cache. In that case, you want to look at pickling the data, possibly as the final step of a python script which calculates it, and then loading that data in a module that you import.
To do that, import pickle, and dump a single object out to a disk file:
recalculate.py:
# Here is the script that computes a small value from the hugely complicated domain:
import pickle
import random
from collections import Counter

# Collect all of the random numbers
random_numbers = [random.randint(0, 10) for r in range(1000000)]

# Find the most common value
# TODO: Check this -- this should definitely be 7
most_popular = Counter(random_numbers).most_common(1)[0][0]

# Now save the most common random number to disk, using pickle.
# Almost any object is picklable like this, but check the docs for the exact details.
with open('data_cache', 'wb') as f:
    pickle.dump(most_popular, f)
Now, in your constants file, you can simply read the pickled data from the file on disk, and have it available without recalculating it:
constants.py:
import pickle

with open('data_cache', 'rb') as f:
    most_popular = pickle.load(f)
everywhere else:
from constants import most_popular
