fitdistr in rpy2 - python

I have a 1D list of data that I want to fit to a distribution using either least squares or maximum likelihood, as presented here, but I want to do it from Python instead of the R interactive shell.
I have rpy2 installed and would like to use the fitdistr function from within the interactive IPython shell, where I already have the data imported as a list.
Where is this function, and how do I use it?

The function is in the R package MASS
from rpy2.robjects.packages import importr
MASS = importr('MASS')
# the function is now at MASS.fitdistr
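A minimal sketch of how the call could then look (the data values here are made up; the Python list just needs to be converted to an R numeric vector first):
from rpy2.robjects import FloatVector
from rpy2.robjects.packages import importr

MASS = importr('MASS')

# convert the Python list to an R numeric vector
data = FloatVector([1.2, 0.8, 1.1, 0.9, 1.4, 1.0])

# maximum-likelihood fit of a normal distribution
fit = MASS.fitdistr(data, 'normal')
print(fit)                       # parameter estimates and standard errors
estimates = fit.rx2('estimate')  # named vector with the fitted mean and sd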

Related

Parametric sweep in OpenModelica using OMPython

I am trying to run a parametric sweep in OpenModelica using OMPython. Let's assume that I have a Modelica model my_model.mo belonging to the library my_library. The model has two parameters: a and b.
I successfully managed to run a single parametric run by using the following code:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
where the file parameter_sweep.txt is:
a=5
b=6
Now the question is: how can I run multiple parametric runs? I could add one more line to the code where a new txt file (parameter_sweep1.txt) with a new set of values for the parameters is used:
from OMPython import OMCSessionZMQ
omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep.txt", stopTime=86400)')
omc.sendExpression('simulate(my_library.my_model, simflags="-overrideFile=parameter_sweep1.txt", stopTime=86400)')
However, I am afraid that in this way there is the need to recompile. Is there a way to do multiple parametric runs and avoid re-compilation?
Use the buildModel command instead of simulate. Then start the process manually in Python using a library such as subprocess. The command is simply something like:
["./my_library.my_model", "-overrideFile=parameter_sweep.txt"]
(If you use Windows, I believe you need to update your PATH environment variable as well, in order to find the used DLLs. If you use Linux, it just works.)
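A rough sketch of that idea (model name and flags taken from the question; the build options and working-directory handling are assumptions that may need adjusting for your setup):
import subprocess
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()
omc.sendExpression('loadModel(my_library)')
# compile the model once; buildModel accepts simulation options such as stopTime
omc.sendExpression('buildModel(my_library.my_model, stopTime=86400)')
# the executable is generated in omc's working directory
workdir = omc.sendExpression('cd()')

# re-run the compiled binary with different override files, no recompilation
for override in ['parameter_sweep.txt', 'parameter_sweep1.txt']:
    subprocess.run(['./my_library.my_model', '-overrideFile=' + override],
                   cwd=workdir, check=True)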

Is there a way to call and store data defined in a python script in to julia?

I have a Python script that generates a weighted random graph. I want to use the weights generated in that script in a different Julia program. I am able to run the Python code through Julia by using PyCall, but I am unable to get any of the data from the graph. Is there any way to do that?
In the Python code, 'wt' stores the edge data.
When I print 'wt' in the Python code, it prints the nodes between which each edge is present along with the weights.
This gives me the required graph. I want to access 'wt' in Julia. How can I do that?
Python code
wt = G.edges.data('weight')
print(wt)
Julia code
using PyCall
y = py"exec(open('wtgraph.py').read())"
For your example it would be something like this (you didn't provide the complete code):
using PyCall
py"""
import something as G
def py_function(x):
    return G.edges.data(x)
"""
wt = py"py_function"("weight")

Improve Runtime for calling R function in python function using rpy2

I primarily program in Python (using Jupyter notebooks), but on occasion need to use an R function. I currently do this with rpy2 and R magic, which works fine. Now I would like to write a function that summarizes part of my analysis procedure into one wrapper function (so I don't always need to run all of the code cells but can simply execute the function once). As part of this procedure I need to call an R function. I adapted my code to import the R function into Python using the rpy2.robjects interface with importr. This works, but it is extremely slow (more than triple the run time for an already lengthy procedure), which makes it simply not feasible for my analysis. I assume this has to do with accessing R through the high-level interface of rpy2 instead of the low-level interface. I am unsure how to use the low-level interface within a function call, though, and would need some help adapting my code.
I've tried looking into the rpy2 documentation but am struggling to understand it.
This is my code for executing the R function call from within python using R magic.
Activating rpy2 R magic
%load_ext rpy2.ipython
Load my required libraries
%%R
library(scran)
Actually call the R function
%%R -i data_mat -i input_groups -o size_factors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min.mean=0.1)
This is my alternative code to import the R function using rpy2 importr.
from rpy2.robjects.packages import importr
scran = importr('scran')
computeSumFactors = scran.computeSumFactors
size_factors = computeSumFactors(data_mat, clusters=input_groups, min_mean=0.1)
For some reason this second approach is orders of magnitude slower.
Any help would be much apreciated.
The only difference between the two that I can see having an influence on the observed execution speed is conversion.
When running in an "R magic" code cell (prefixed with %%R), the result of calling computeSumFactors() in your example is an R object bound to the symbol size_factors in R. In the other case, the result of calling computeSumFactors() goes through the conversion system (and what exactly happens there depends on which converters are active) before the result is bound to the Python symbol size_factors.
Conversion can be costly: you should consider trying to deactivate numpy / pandas conversion (the localconverter context manager can be a convenient way to temporarily use minimal conversion for a code block).
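A sketch of that suggestion (assuming rpy2 3.x and that data_mat and input_groups are numpy arrays; names reused from the code above):
import rpy2.robjects as ro
from rpy2.robjects import numpy2ri
from rpy2.robjects.conversion import localconverter
from rpy2.robjects.packages import importr

scran = importr('scran')

# convert the inputs to R objects once, with numpy conversion enabled
with localconverter(ro.default_converter + numpy2ri.converter):
    r_data_mat = ro.conversion.py2rpy(data_mat)
    r_groups = ro.conversion.py2rpy(input_groups)

# call the function with only the minimal default converter active, so the
# result stays an rpy2 object instead of being copied back into numpy/pandas
with localconverter(ro.default_converter):
    size_factors = scran.computeSumFactors(r_data_mat, clusters=r_groups, min_mean=0.1)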

How do you import a Python library within an R package using rPython?

The basic question is this: let's say I was writing R functions that call Python via rPython, and I want to integrate this into a package. That's simple: it's irrelevant that the R function wraps around Python, and you proceed as usual, e.g.
# trivial example
# library(rPython)
add <- function(x, y) {
python.assign("x", x)
python.assign("y", y)
python.exec("result = x+y")
result <- python.get("result")
return(result)
}
But what if the Python code inside the R functions requires users to import Python libraries first? e.g.
# python code, not R
import numpy as np
print(np.sin(np.deg2rad(90)))
# R function that call Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
python.assign("degree", degree)
python.exec('result = np.sin(np.deg2rad(degree))')
result <- python.get('result')
return(result)
}
If you run this without importing the library numpy, you will get an error.
How do you import a Python library in an R package? How do you comment it with roxygen2?
It appears the R standard is this:
# R function that call Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
python.assign("degree", degree)
python.exec('import numpy as np')
python.exec('result = np.sin(np.deg2rad(degree))')
result <- python.get('result')
return(result)
}
But this way, each time you run the R function, you import an entire Python library again.
As @Spacedman and @DirkEddelbuettel suggest, you could add a .onLoad/.onAttach function to your package that calls python.exec to import the modules that users of your package will typically always need.
You could also test whether the module has already been imported before importing it, but (a) that gets you into a bit of an infinite-regress problem, because you need to import sys in order to perform the test, and (b) the answers to that question suggest that, at least in terms of performance, it shouldn't matter, e.g.
If you want to optimize by not importing things twice, save yourself the hassle because Python already takes care of this.
(although admittedly there is some discussion elsewhere on that page about possible scenarios where there could be a performance cost).
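For illustration only (a sketch; in the rPython setting this check would have to be sent through python.exec), the guard being discussed looks roughly like this on the Python side:
import sys

# only import numpy if this Python session has not imported it yet; in practice
# Python's module cache (sys.modules) already makes a repeated import cheap,
# so this guard buys very little
if 'numpy' in sys.modules:
    np = sys.modules['numpy']
else:
    import numpy as np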
But maybe your concern is stylistic rather than performance-oriented ...

Syntax for rpy2 base.with function

I'm trying to figure out rpy2 for plotting some graphs. I'd like to be able to use the with function that's part of R's base, like it's used in the following R code:
with(res, plot(log2FoldChange, -log10(pvalue), pch=20, main="Volcano plot", xlim=c(-2.5,2)))
with(subset(res, padj<.05 ), points(log2FoldChange, -log10(pvalue), pch=20, col="red"))
Where res is a dataframe and log2FoldChange and pvalue are columns from that dataframe.
When I import the base package using rpy2's importr I can see that 'with' is in the object by doing:
from rpy2.robjects.packages import importr
base = importr('base')
dir(base)
However, I can't seem to figure out the correct syntax:
from rpy2.robjects.packages import importr
from rpy2 import robjects
base = importr('base')
base.with(res, robjects.r.plot(log2FoldChange, padj))
File "<stdin>", line 1
base.with(res, robjects.r.plot(log2FoldChange, padj))
^
SyntaxError: invalid syntax
Unfortunately, searching for something like 'base.with' has proven intractable. My question: what is the syntax for using 'base.with' in rpy2 python code?
Alternatively, while using 'with' is the most R forward approach to doing this, perhaps there's a more rpy2 friendly approach to this same problem that I'm unaware of.
Python is conflicting with its own with statement: with is a reserved keyword, so base.with(...) is invalid Python syntax no matter what rpy2 exposes. This is one of the challenges of interfacing with another language.
Try running the command natively in R syntax through the robjects.r function. Below, Python objects are first passed into R's global environment scope.
import rpy2.robjects as ro
ro.globalenv['res'] = res_frompy
ro.globalenv['log2FoldChange'] = log2FoldChange_frompy
ro.globalenv['padj'] = padj_frompy
ro.r('with(res, plot(log2FoldChange, padj))')
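As a side note (a sketch, not part of the original answer): the R function can still be reached from rpy2 despite the keyword clash by looking it up by name instead of as an attribute. Because with() evaluates its second argument inside the data frame, the call has to be passed as a quoted R expression, which is why the string-based approach above is usually simpler:
import rpy2.robjects as ro

# 'with' is a reserved word in Python, so fetch the R function by name
r_with = ro.r['with']

# pass the plot call as an unevaluated (quoted) R expression; res is assumed
# to already be an R data frame on the rpy2 side
volcano = ro.r('quote(plot(log2FoldChange, -log10(pvalue), pch=20, main="Volcano plot"))')
r_with(res, volcano)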
