I have a Python script (converted from an IPython Notebook) with
def train_mod_inR():
    get_ipython().magic(u'R -i myinput')
    get_ipython().magic(u'R -o res res <- Rfunction(myinput)')
    return res
I am trying to run the script using plain Python without any of the magic, so I am using rpy2. Does anyone know an easy way to translate the function above so it works with standard rpy2? E.g., is it:
def train_mod_inR():
    rpy2.robjects('-i myinput')
    rpy2.robjects('-o res res <- Rfunction(myinput)')
    return res
In fact, does anyone have a good workaround for this in general? I have a feeling this will require a bit more than what I've shown: is there any command in rpy2 that will let us evaluate R code directly, the way the magic command does?
You were pretty close.
An exact translation of what is happening in the magic would be
(NOTE: the object "converter" is defined in the next code block, to break down what is happening; just assume that it is an rpy2 converter for now):
import rpy2.robjects
from rpy2.robjects.conversion import localconverter

# Take the Python object known as "myinput", pass it to R
# (going through the rpy2 conversion layer), and make it known to R
# as "myinput".
with localconverter(converter) as cv:
    rpy2.robjects.globalenv['myinput'] = myinput

# Run the R function known (to R) as "Rfunction", using the object known
# to R as "myinput" as the argument. Make the resulting object known to R
# as "res".
rpy2.robjects.r('res <- Rfunction(myinput)')

# Take the R object "res", pass it to Python (going through the rpy2
# conversion layer), and make it known to Python as "res".
with localconverter(converter) as cv:
    res = rpy2.robjects.globalenv['res']
The conversion can be customized in the R magic (https://rpy2.github.io/doc/v3.1.x/html/interactive.html#module-rpy2.ipython.rmagic), but assuming that you are using the current default it would be (testing whether numpy or pandas are installed is left as an exercise for the reader):
from rpy2.robjects.conversion import Converter

converter = Converter('ipython conversion')

if has_numpy:
    from rpy2.robjects import numpy2ri
    converter += numpy2ri.converter
if has_pandas:
    from rpy2.robjects import pandas2ri
    converter += pandas2ri.converter
Note that conversion can be an expensive step (for example, there is no C-level mapping between R and Python for arrays of strings, so they have to be copied over in a loop), and depending on how that R result is used elsewhere in the code you may want to go for a lighter conversion (or no conversion at all).
Additional comment
Now, the above is for when one wants to exactly replicate what is happening in an "R magic" cell in Jupyter. The core part is the execution of the cell, that is, the evaluation of the string:
rpy2.robjects.r('res <- Rfunction(myinput)')
The evaluation of that string can be moved to Python, progressively or all at once. In this case only the latter is possible because there is only one function call. The function known to R as "Rfunction" can be mapped to a callable Python object:
rfunction = rpy2.robjects.r('Rfunction')
If you are only interested in having res in Python, the creation of the symbols myinput and res in R's globalenv might no longer be necessary, and the whole thing can be shortened to:
with localconverter(converter) as cv:
    res = rfunction(myinput)
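Putting the pieces together, a translation of the original function could look like this sketch (assuming numpy-based conversion is wanted, and that an R function named Rfunction, as in the question, exists in the running R session):

```python
import rpy2.robjects
from rpy2.robjects import default_converter, numpy2ri
from rpy2.robjects.conversion import localconverter

# A converter close to the R magic's default (numpy only here).
converter = default_converter + numpy2ri.converter

def train_mod_inR(myinput):
    # Map the R function "Rfunction" to a callable Python object.
    rfunction = rpy2.robjects.r('Rfunction')
    # Convert the argument on the way in and the result on the way out.
    with localconverter(converter):
        res = rfunction(myinput)
    return res
```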
Related
I primarily program in Python (using Jupyter notebooks) but on occasion need to use an R function. I currently do this by using rpy2 and R magic, which works fine. Now I would like to write a function that summarizes part of my analysis procedure in one wrapper (so I don't always need to run all of the code cells but can simply execute the function once). As part of this procedure I need to call an R function. I adapted my code to import the R function into Python using the rpy2.robjects interface with importr. This works, but it is extremely slow (more than triple the run time of an already lengthy procedure), which makes it simply not feasible for my analysis. I am assuming this has to do with accessing R through the high-level interface of rpy2 instead of the low-level interface. I am unsure how to use the low-level interface within a function call, though, and would need some help adapting my code.
I've tried looking into the rpy2 documentation but am struggling to understand it.
This is my code for executing the R function call from within python using R magic.
Activating rpy2 R magic
%load_ext rpy2.ipython
Load my required libraries
%%R
library(scran)
Actually call the R function
%%R -i data_mat -i input_groups -o size_factors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min.mean=0.1)
This is my alternative code to import the R function using rpy2 importr.
from rpy2.robjects.packages import importr
scran = importr('scran')
computeSumFactors = scran.computeSumFactors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min_mean=0.1)
For some reason this second approach is orders of magnitude slower.
Any help would be much appreciated.
The only difference between the two that I can see having an influence on the observed execution speed is conversion.
When running in an "R magic" code cell (prefixed with %%R), in your example the result of calling computeSumFactors() is an R object bound to the symbol size_factors in R. In the other case, the result of calling computeSumFactors() will go through the conversion system (what exactly happens there depends on which converters are active) before the result is bound to the Python symbol size_factors.
Conversion can be costly: you should consider trying to deactivate numpy / pandas conversion (the localconverter context manager can be a convenient way to temporarily use minimal conversion for a code block).
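For instance (a sketch, assuming the scran package is installed and that data_mat and input_groups are already rpy2 objects), the importr-based call could be run under the minimal default converter, so that the result stays an unconverted R object, just as in the %%R cell:

```python
from rpy2.robjects import default_converter
from rpy2.robjects.conversion import localconverter
from rpy2.robjects.packages import importr

scran = importr('scran')

# Use only the default (minimal) converter for this block: the result is
# kept as an rpy2 proxy for the R object, skipping numpy/pandas conversion.
with localconverter(default_converter):
    size_factors = scran.computeSumFactors(data_mat,
                                           clusters=input_groups,
                                           min_mean=0.1)
```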
In R we can use Rcpp to call a cpp function as the one below:
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
SEXP critcpp(SEXP a, SEXP b){
  NumericMatrix X(a);
  NumericVector crit(b);
  int p = X.ncol();
  NumericMatrix critstep(p,p);
  NumericMatrix deltamin(p,p);
  List lst(2);
  for (int i = 0; i < (p-1); i++){
    for (int j = i+1; j < p; j++){
      // some calculations
    }
  }
  lst[0] = critstep;
  lst[1] = deltamin;
  return lst;
}
I want to do the same thing in python.
I have gone through Boost, SWIG, etc., but it seems complicated to my newbie Python eyes.
Can the python wizards here kindly point me in the right direction.
I need to call this C++ function from inside a Python function.
Since I think the only real answer is to spend some time rewriting the function you posted, or to write some sort of wrapper for it (absolutely possible but quite time consuming), I'm answering with a completely different approach...
Without going through any sort of compiled conversion, a really faster way (from a programming-time point of view, not in efficiency) may be to call the R interpreter, with the module containing the function you posted, directly from within Python through the rpy2 module, as described here. It requires the pandas module to handle the data frames from R.
The modules to use (in Python) are:
import numpy as np # for handling numerical arrays
import scipy as sp # a good utility
import pandas as pd # for data frames
from rpy2.robjects.packages import importr # for importing your module
import rpy2.robjects as ro # for calling R interpreter from within python
import pandas.rpy.common as com # for storing R data frames in pandas data frames.
In your code you should import your module by calling importr
importr('your-module-with-your-cpp-function')
and you can send commands directly to R by issuing:
ro.r('x = your.function( blah blah )')
x_rpy = ro.r('x')
type(x_rpy)
# => rpy2.robjects.your-object-type
you can store your data in a data frame by:
py_df = com.load_data('variable.name')
and push back a data frame through:
r_df = com.convert_to_r_dataframe(py_df)
ro.globalenv['df'] = r_df
This is for sure a workaround for your question, but it may be considered as a reasonable solution for certain applications, even if I do not suggest it for "production".
The basic question is this: Let's say I was writing R functions which called python via rPython, and I want to integrate this into a package. That's simple---it's irrelevant that the R function wraps around Python, and you proceed as usual. e.g.
# trivial example
# library(rPython)
add <- function(x, y) {
  python.assign("x", x)
  python.assign("y", y)
  python.exec("result = x + y")
  result <- python.get("result")
  return(result)
}
But what if the Python code called by the R functions requires users to import Python libraries first? e.g.
# python code, not R
import numpy as np
print(np.sin(np.deg2rad(90)))
# R function that call Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
  python.assign("degree", degree)
  python.exec('result = np.sin(np.deg2rad(degree))')
  result <- python.get('result')
  return(result)
}
If you run this without importing the library numpy, you will get an error.
How do you import a Python library in an R package? How do you comment it with roxygen2?
It appears the R standard is this:
# R function that call Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
  python.assign("degree", degree)
  python.exec('import numpy as np')
  python.exec('result = np.sin(np.deg2rad(degree))')
  result <- python.get('result')
  return(result)
}
Each time you run an R function, you will import an entire Python library.
As #Spacedman and #DirkEddelbuettel suggest you could add a .onLoad/.onAttach function to your package that calls python.exec to import the modules that will typically always be required by users of your package.
You could also test whether the module has already been imported before importing it, but (a) that gets you into a bit of an infinite-regress problem, because you need to import sys in order to perform the test; and (b) the answers to that question suggest that, at least in terms of performance, it shouldn't matter, e.g.
If you want to optimize by not importing things twice, save yourself the hassle because Python already takes care of this.
(although admittedly there is some quibbling discussion elsewhere on that page about possible scenarios where there could be a performance cost).
But maybe your concern is stylistic rather than performance-oriented ...
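The caching behaviour is easy to see from the Python side: after the first import a module is recorded in sys.modules, and importing it again is essentially a dictionary lookup that returns the same module object.

```python
import sys
import importlib

import math  # first import: the module is loaded and cached

# The module now sits in the sys.modules cache ...
assert 'math' in sys.modules

# ... so a repeated import returns the very same module object
# instead of re-executing the module's code.
math_again = importlib.import_module('math')
assert math_again is math
```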
I have a Go program
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"fmt"
)

func main() {
	val := []byte("nJ1m4Cc3")
	hasher := hmac.New(sha1.New, val)
	fmt.Printf("%x\n", hasher.Sum(nil))
	// f7c0aebfb7db2c15f1945a6b7b5286d173df894d
}
And a Python (2.7) program that is attempting to reproduce the Go code (using crypto/hmac)
import hashlib
val = u'nJ1m4Cc3'
hasher = hashlib.new("sha1", val)
print hasher.hexdigest()
# d67c1f445987c52bceb8d6475c30a8b0e9a3365d
Using the hmac module gives me a different result but still not the same as the Go code.
import hmac
val = 'nJ1m4Cc3'
h = hmac.new("sha1", val)
print h.hexdigest()
# d34435851209e463deeeb40cba7b75ef
Why do these print different values when they use the same hash on the same input?
You have to make sure that
the input in both scenarios is equivalent and that
the processing method in both scenarios is equivalent.
In both cases, the input should be the same binary blob. In your Python program you define a unicode object, and you do not take control of its binary representation. Replace the u prefix with a b, and you are fine (this is the explicit way to define a byte sequence in Python 2.7 and 3). This is not the actual problem, but better be explicit here.
The problem is that you apply different methods in your Go and Python implementations.
Given that Python is the reference
Your first Python snippet just builds a SHA1 hash of your data; there is no need to import "crypto/hmac" at all. In Go, the equivalent would be:
package main

import (
	"crypto/sha1"
	"fmt"
)

func main() {
	data := []byte("nJ1m4Cc3")
	fmt.Printf("%x", sha1.Sum(data))
}
Test and output:
go run hashit.go
d67c1f445987c52bceb8d6475c30a8b0e9a3365d
This reproduces what your first Python snippet creates.
Edit: I have simplified the Go code a bit, to not make Python look more elegant. Go is quite elegant here, too :-).
Given that Go is the reference
import hmac
import hashlib
data = b'nJ1m4Cc3'
h = hmac.new(key=data, digestmod=hashlib.sha1)
print h.hexdigest()
Test & output:
python hashit.py
f7c0aebfb7db2c15f1945a6b7b5286d173df894d
This reproduces what your Go snippet creates. I am, however, not sure about the cryptographic significance of an HMAC when one uses an empty message.
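All three digests from the question can be reproduced side by side in one snippet (Python 3; the last call spells out Python 2's implicit MD5 default, since the question's hmac.new("sha1", val) used the string "sha1" as the key, not as the digest name):

```python
import hashlib
import hmac

data = b'nJ1m4Cc3'

# Plain SHA-1 of the data: the first Python snippet (and the Go code
# without crypto/hmac).
plain = hashlib.sha1(data).hexdigest()
# d67c1f445987c52bceb8d6475c30a8b0e9a3365d

# HMAC-SHA1 with the data as key and an empty message: the original Go code.
mac = hmac.new(key=data, msg=b'', digestmod=hashlib.sha1).hexdigest()
# f7c0aebfb7db2c15f1945a6b7b5286d173df894d

# What the second Python snippet actually computed: key b"sha1",
# message data, and Python 2's then-default digest, MD5.
md5mac = hmac.new(key=b'sha1', msg=data, digestmod=hashlib.md5).hexdigest()
# d34435851209e463deeeb40cba7b75ef
```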
I have R code that uses RQuantlib library. In order to run it from python I am using RPy2. I know python has its own bindings for quantlib (quantlib-python). I'd like to switch from R to python completely.
Please let me know how I can run the following using quantlib-python
import rpy2.robjects as robjects
robjects.r('library(RQuantLib)')
x = robjects.r('x<-EuropeanOptionImpliedVolatility(type="call", value=11.10, underlying=100,strike=100, dividendYield=0.01, riskFreeRate=0.03,maturity=0.5, volatility=0.4)')
print x
Sample run:
$ python vol.py
Loading required package: Rcpp
Implied Volatility for EuropeanOptionImpliedVolatility is 0.381
You'll need a bit of setup. For convenience, and unless you get name clashes, you better import everything:
from QuantLib import *
then, create the option, which needs an exercise and a payoff:
exercise = EuropeanExercise(Date(3,August,2011))
payoff = PlainVanillaPayoff(Option.Call, 100.0)
option = EuropeanOption(payoff,exercise)
(note that you'll need an exercise date, not a time to maturity.)
Now, whether you want to price it or get its implied volatility, you'll have to setup a Black-Scholes process. There's a bit of machinery involved, since you can't just pass a value, say, of the risk-free rate: you'll need a full curve, so you'll create a flat one and wrap it in a handle. Ditto for dividend yield and vol; the underlying value goes in a quote. (I'm not explaining what all the objects are; comment if you need it.)
S = QuoteHandle(SimpleQuote(100.0))
r = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.03, Actual360()))
q = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.01, Actual360()))
sigma = BlackVolTermStructureHandle(BlackConstantVol(0, TARGET(), 0.20, Actual360()))
process = BlackScholesMertonProcess(S,q,r,sigma)
(the volatility won't actually be used for implied-vol calculation, but you need one anyway.)
Now, for implied volatility you'll call:
option.impliedVolatility(11.10, process)
and for pricing:
engine = AnalyticEuropeanEngine(process)
option.setPricingEngine(engine)
option.NPV()
You might use other features (wrap rates in a quote so you can change them later, etc.) but this should get you started.
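Assembled into one script, the RQuantLib call could be replaced by something like the following sketch (note the caveat above about dates: the implied volatility will only match RQuantLib's 0.381 when the exercise date lies about half a year after the evaluation date):

```python
from QuantLib import *

# Option: exercise date and payoff (QuantLib wants a date, not a maturity).
exercise = EuropeanExercise(Date(3, August, 2011))
payoff = PlainVanillaPayoff(Option.Call, 100.0)
option = EuropeanOption(payoff, exercise)

# Black-Scholes-Merton process: flat curves wrapped in handles, the
# underlying value in a quote, and a placeholder volatility.
S = QuoteHandle(SimpleQuote(100.0))
r = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.03, Actual360()))
q = YieldTermStructureHandle(FlatForward(0, TARGET(), 0.01, Actual360()))
sigma = BlackVolTermStructureHandle(
    BlackConstantVol(0, TARGET(), 0.20, Actual360()))
process = BlackScholesMertonProcess(S, q, r, sigma)

# Implied volatility from the observed price of 11.10.
print(option.impliedVolatility(11.10, process))
```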