Python math module grad

I am trying to use the basic sin, cos, arctan, etc. functions from numpy, but I want to work in gradians. I have searched the docs without success, and searched for other Python modules without luck. Any suggestion on a Python module I could use?
Or a function that will work. I have tried different ways to convert grad to rad and back to grad again, but none of them are working.

This one should be fine!
import math

def gradFromRad(rad):
    return 200 * rad / math.pi

def radFromGrad(grad):
    return math.pi * grad / 200
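To use numpy's trig functions with gradians, convert on the way in with the helper above (a minimal sketch; sin_grad is just an illustrative wrapper name):
import numpy as np

def sin_grad(angle_grad):
    # numpy works in radians, so convert from gradians first (400 grad = full circle)
    return np.sin(radFromGrad(angle_grad))

print(sin_grad(100))  # 100 grad is a right angle, so this prints 1.0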

Related

No module named 'sklearn.utils.linear_assignment_'

I am trying to run a project from GitHub, an object counter application using the SORT algorithm. I can't run any of them because of a specific error (screenshot of the errors attached). Can anyone help me fix this issue?
The linear_assignment function is deprecated in 0.21 and will be removed in 0.23, but sklearn.utils.linear_assignment_ can be replaced by scipy.optimize.linear_sum_assignment.
You can use:
from scipy.optimize import linear_sum_assignment as linear_assignment
then you can run the file without changing the code.
Alternatively, pin an older scikit-learn release that still ships the module:
pip install scikit-learn==0.22.2
As yiakwy points out in a GitHub comment, scipy.optimize.linear_sum_assignment is not a perfect replacement:
I am concerned that linear_sum_assignment is not equivalent to linear_assignment, which implements a "maximum values" matching strategy rather than a "complete matching" strategy, i.e. in a tracking problem an old landmark may be lost and a new detection may come in; we don't have to make a complete assignment, just match as many as possible.
I found this out while trying to use it inside SORT-based YOLO tracking code, which that replacement broke (I was lucky that it broke visibly, otherwise I would have gotten wrong results from the experiments without realising it...).
Instead, I suggest copying the module itself from the last version of sklearn that includes it and adding it as a module in your own code:
https://github.com/scikit-learn/scikit-learn/blob/0.22.X/sklearn/utils/linear_assignment_.py
For instance, if you copy this file into a utils directory, import it with from utils.linear_assignment_ import linear_assignment
Solution
Use pip to install lap and, optionally, scipy.
Uncomment the import and use the following function:
import numpy as np

def linear_assignment(cost_matrix):
    try:
        import lap
        # lapjv returns (total cost, row-to-column assignment, column-to-row assignment)
        _, x, y = lap.lapjv(cost_matrix, extend_cost=True)
        return np.array([[y[i], i] for i in x if i >= 0])
    except ImportError:
        from scipy.optimize import linear_sum_assignment
        x, y = linear_sum_assignment(cost_matrix)
        return np.array(list(zip(x, y)))
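As a quick usage sketch (the cost matrix below is made up for illustration), the function returns matched (row, column) index pairs:
import numpy as np

cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
print(linear_assignment(cost))  # one (row, column) pair per matched row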
You are getting this error because you haven't installed the scikit-learn module yet.
Install scikit-learn from https://pypi.org/project/scikit-learn/

Python code that calculates 4*(1 - (1/3) + (1/5) - (1/7) + ... + (1/(2n-1)))

I'm interested in math but I don't know a lot about coding in Python. I want to write code in Python that calculates:
4*(1 - (1/3) + (1/5) - (1/7) + ... + (1/(2n-1)))
which converges to pi. I want Python code so that I can enter n, for example 1, 2, 3, 1000, ..., and see the answer.
Here:
def pi_approx(n):
    return 4 * sum([((-1) ** i) / (2 * i + 1) for i in range(n)])
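For example, calling it with increasing n shows the slow convergence toward pi (the error shrinks roughly like 1/n):
for n in (10, 100, 1000, 100000):
    print(n, pi_approx(n))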

VSCode IntelliSense with Python C extension module (petsc4py)

I'm currently using a Python module called petsc4py (https://pypi.org/project/petsc4py/). My main issue is that none of the typical IntelliSense features seem to work with this module.
I'm guessing it might have something to do with it being a C extension module, but I am not sure exactly why this happens. I initially thought that IntelliSense was unable to look inside ".so" files, but it seems that numpy manages it for the array object, which in my case lives in a file called multiarray.cpython-37m-x86_64-linux-gnu (see the example below).
Does anyone know why I see this behaviour with the petsc4py module? Is there anything that I (or the developers of petsc4py) can do to get IntelliSense to work?
Example:
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc

# petsc4py: IntelliSense offers no suggestions for x_p or u_p
x_p = PETSc.Vec().create()
x_p.setSizes(10)
x_p.setFromOptions()
u_p = x_p.duplicate()

# numpy: IntelliSense works fine for x_n and u_n
import numpy as np
x_n = np.array([1, 2, 3])
u_n = x_n.copy()
In this example, when working with a Vec object from petsc4py, typing u_p.duplicate() cannot find the function and the suggestion is simply a repetition of the function immediately before. However, with an array from numpy, u_n.copy() works perfectly.
If you're compiling in-place then you're bumping up against https://github.com/microsoft/python-language-server/issues/197.

How do you import a Python library within an R package using rPython?

The basic question is this: let's say I am writing R functions which call Python via rPython, and I want to integrate this into a package. That's simple: it's irrelevant that the R function wraps around Python, and you proceed as usual, e.g.
# trivial example
# library(rPython)
add <- function(x, y) {
  python.assign("x", x)
  python.assign("y", y)
  python.exec("result = x+y")
  result <- python.get("result")
  return(result)
}
But what if the Python code behind the R functions requires users to import Python libraries first? e.g.
# python code, not R
import numpy as np
print(np.sin(np.deg2rad(90)))
# R function that calls Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
  python.assign("degree", degree)
  python.exec('result = np.sin(np.deg2rad(degree))')
  result <- python.get('result')
  return(result)
}
If you run this without first importing the numpy library, you will get an error.
How do you import a Python library in an R package? How do you document it with roxygen2?
It appears the R standard is this:
# R function that calls Python via rPython
# *this function will not run without first executing `import numpy as np`
print_sin <- function(degree){
  python.assign("degree", degree)
  python.exec('import numpy as np')
  python.exec('result = np.sin(np.deg2rad(degree))')
  result <- python.get('result')
  return(result)
}
Each time you run an R function, you will import an entire Python library.
As @Spacedman and @DirkEddelbuettel suggest, you could add a .onLoad/.onAttach function to your package that calls python.exec to import the modules that will typically always be required by users of your package.
You could also test whether the module has already been imported before importing it, but (a) that gets you into a bit of a regression problem, because you need to import sys in order to perform the test, and (b) the answers to that question suggest that, at least in terms of performance, it shouldn't matter, e.g.
If you want to optimize by not importing things twice, save yourself the hassle because Python already takes care of this.
(although admittedly there is some quibbling discussion elsewhere on that page about possible scenarios where there could be a performance cost).
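For what it's worth, you can see Python's import cache at work directly; here is a minimal Python sketch (using numpy purely as an example module):
import sys
import time

t0 = time.perf_counter()
import numpy                      # first import actually loads the module
first = time.perf_counter() - t0

t0 = time.perf_counter()
import numpy                      # already cached in sys.modules, so this is nearly free
second = time.perf_counter() - t0

print('numpy' in sys.modules)     # True
print(first, second)              # the second import is typically far cheaper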
But maybe your concern is stylistic rather than performance-oriented ...

Optimization in Python for simple linear function

If the price charged for a crayon is p cents, then x thousand crayons
will be sold in a certain school store, where p(x) = 122 - x/34.
Using Python, calculate how many crayons must be sold to maximize
revenue.
I can solve this by hand easily enough; the only problem is how I can do it using plain Python. I am using IDLE (Python GUI). I am new to Python and haven't downloaded any external libraries. Any help will be greatly appreciated.
What I've done up to this point is:
import math

def f(x):
    return 122 - (x / 34.0)        # price p(x) in cents

def g(x):
    return x * f(x)                # revenue

def h(x):
    return 122 - (2 * x / 34.0)    # derivative of the revenue
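Since h(x) above is the derivative of the revenue and is linear, one way to finish this approach in plain Python (a sketch building on the asker's own functions, no external libraries) is to solve h(x) = 0 directly:
x_best = 122 * 34 / 2.0            # h(x) = 0  <=>  x = 122*34/2 = 2074 (thousand crayons)
print(x_best, g(x_best))           # maximum revenue g(2074) = 126514.0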
Use SymPy. It's simple, beautiful and powerful.
You can write down your expressions with sympify(), like this:
p = sympify('122 - x/34')
And define symbols for symbolic evaluation with Symbol() and symbols().
With that you can simply use the solve() function for any given equation, e.g. x + 4 = 2x:
res = solve('x + 4 - 2*x')
It's pretty much the tool I use for any math work in Python.
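As a concrete sketch of that workflow applied to this problem (assuming SymPy is installed; the revenue expression x*p(x) comes from the question):
from sympy import symbols, sympify, diff, solve

x = symbols('x')
revenue = x * sympify('122 - x/34')      # revenue = x * p(x)
critical = solve(diff(revenue, x), x)    # solve d(revenue)/dx = 0
print(critical)                          # [2074]
print(revenue.subs(x, critical[0]))      # 126514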
So, you should go and download an external library for this, as it's not functionality that Python makes easy to implement natively. Also, if you're serious about doing mathematical computation in Python, I would suggest switching operating systems to something like OS X or Linux, simply because compiling the old FORTRAN libraries (required for much performant mathematical computing) is a huge pain on Windows.
You will want to make use of the scipy library here, which has an optimize module. Specifically, I would suggest using the optimize.minimize_scalar function. Docs can be found here.
>>> from scipy.optimize import minimize_scalar
>>> def g(x):
...     return -(x * (122 - (x / 34)))  # negated, because minimize_scalar minimizes
>>> minimize_scalar(g, bounds=(1, 10000), method='bounded')
status: 0
nfev: 6
success: True
fun: -126514.0
x: 2074.0
message: 'Solution found.'
