I'd like to fit some data with an exponential function. I used scipy.optimize.curve_fit because I had already used it for other fits. This time there is an issue, and I can't figure out what's wrong.
Here is what the data looks like when plotted:
[data.png: plot of the data]
As you can see, it seems to follow an exponential law.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
data = np.array([
0., 1.93468444, 3.69735865, 5.38185988, 6.02549022,
6.69199075, 7.72316694, 8.08913061, 8.84570241, 8.69711608,
8.80038144, 9.78951087, 9.68486674, 10.06175145, 10.44039495,
10.0481156 , 9.76656204, 9.88581457, 9.81805445, 10.42432252,
10.41102239, 11.2911395 , 9.64866184, 9.98072231, 10.83644694,
10.24748571, 10.81333209, 10.75949899, 10.90367328, 10.42446764,
10.51441017, 10.73047737, 10.8159758 , 10.51013538, 10.02862504,
9.76352112, 10.64829309, 10.6293347 , 10.67752596, 10.34801542,
10.53158576, 10.92883362, 10.67002314, 10.37015825, 10.74876349,
10.12821343, 10.8974205 , 10.1591103 , 10.588377 , 11.92134556,
10.309095 , 11.1174362 , 10.72654524, 10.60890374, 10.37456491,
10.05935346, 11.21295863, 11.09013951, 10.60862773, 11.2558922 ,
11.24660234, 10.35981557, 10.81284365, 10.96113067, 10.22716439,
9.8394873 , 10.01892084, 10.38237311, 10.04920671, 10.87782442,
10.42438756, 10.05614503, 10.5446946 , 9.99974368, 10.76930547,
10.22164072, 10.36942999, 10.89888302, 10.47035428, 10.58157374,
11.12615892, 11.30866718, 10.33215937, 10.46723351, 10.54072701,
11.45027197, 10.45895588, 10.34176601, 10.78405493, 10.43964778,
10.34047484, 10.25099046, 11.05847515, 10.27408195, 10.27529163,
10.16568845, 10.86451738, 10.73205291, 10.73300649, 10.49463959,
10.03729782
])
t = np.linspace(0, 100, len(data))  # time array

def expo(x, a, b, c):  # exponential function for fitting
    return a * np.exp(b * x) + c
fig1, ax1 = plt.subplots()
ax1.plot(t, data, ".", label="data")
coefs = curve_fit(expo, t, data)[0] # fitting
ax1.plot(t, expo(t, coefs[0], coefs[1], coefs[2]), "-", label="fit")
ax1.legend()
plt.show()
The problem is that curve_fit() returns very big or very small coefficients a, b and c, while it should return something more like a = -10.5, b = -0.2, c = 10.5.
The fitting process works by finding a local minimum of a loss function.
If the problem is unconstrained, there may be several such local minima, each giving different parameter values, and you may get a different one from the one you are expecting.
If you have a guess what the parameters should be, you can provide it to narrow the search:
# with an initial guess for values of a, b, c
coefs = curve_fit(expo, t, data, p0=[-10, -1, 10])[0]
The coefficients it produces are:
array([-10.48815244, -0.2091102 , 10.56699883])
Alternatively, you can specify bounds for the parameters:
# with lower and upper bounds for a, b, c
coefs = curve_fit(expo, t, data, bounds=([-20, -2, 0], [-10, 2, 20]))[0]
This gives the same results as above.
Your software probably implements a non-linear regression algorithm.
"Guessed" initial values of the parameters are required to start the iterative process. If the user provides no initial values, the software evaluates some itself. That is often a cause of failure, because the computed initial values might be too far from the correct ones.
Good initial values can be found using a linear regression method, which doesn't require initial values. See the calculation below.
The result is: [image showing the linearized-regression calculation and the resulting values of $a,b,c$]
If the accuracy of the above result is not sufficient according to some specified fitting criteria, a non-linear regression is necessary. In this case the above values of the parameters $a,b,c$ can be used as initial values to start the iterative computation.
Note: the principle of the method, which linearizes the non-linear regression as shown above, is explained in: https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales
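For this particular model, $y = a e^{bx} + c$, the idea can be sketched as follows: since $y' = b(y - c)$, integrating from the first data point gives $y - y_1 = bS - bc(x - x_1)$, where $S$ is the cumulative integral of $y$; so $b$ follows from one ordinary linear least-squares fit, and then $a$ and $c$ from a second. Here is a minimal sketch of that idea (my code, not the linked document's; the helper name expo_initial_guess is made up):
import numpy as np

def expo_initial_guess(x, y):
    # Cumulative integral S of y via the trapezoidal rule, with S[0] = 0.
    S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
    # y - y[0] = b*S - b*c*(x - x[0]): linear least squares gives b directly.
    M = np.column_stack((S, x - x[0]))
    (B1, B2), *_ = np.linalg.lstsq(M, y - y[0], rcond=None)
    b = B1  # coefficient of S
    # With b fixed, y = a*exp(b*x) + c is linear in a and c.
    N = np.column_stack((np.exp(b * x), np.ones_like(x)))
    (a, c), *_ = np.linalg.lstsq(N, y, rcond=None)
    return a, b, c

# Use the guess to seed curve_fit on the data from the question:
a0, b0, c0 = expo_initial_guess(t, data)
coefs = curve_fit(expo, t, data, p0=[a0, b0, c0])[0]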
Here is what I tried: using a negative b in np.exp.
def expo(x, a, b, c):
    return a * np.exp(-b * x) + c
The coefficients returned are then:
[-10.4881516   0.20911016  10.5669989 ]
I asked a since-deleted question about how to determine Fourier coefficients from time series data. I am resubmitting it because I have formulated the problem better and have a solution, given below, that I think others may find very useful.
I have some time series data that I have binned into equally spaced time bins (a fact which will be crucial to my solution), and from that data I want to determine the Fourier series (or any function, really) that best describes the data. Here is a MWE with some test data to show the data I'm trying to fit:
import numpy as np
import matplotlib.pyplot as plt
# Create an independent test variable to define the x-axis of the test data.
test_array = np.linspace(0, 1, 101) - 0.5
# Define some test data to try to apply a Fourier series to.
test_data = [0.9783883464566918, 0.979599093567252, 0.9821424606299206, 0.9857575507812502, 0.9899278899999995,
0.9941848228346452, 0.9978438300395263, 1.0003009205426352, 1.0012208923679058, 1.0017130521235522,
1.0021799664031628, 1.0027475606936413, 1.0034168260869563, 1.0040914266144825, 1.0047781181102355,
1.005520348837209, 1.0061899214145387, 1.006846206627681, 1.0074483048543692, 1.0078691461988312,
1.008318736328125, 1.008446947572815, 1.00862051262136, 1.0085134881422921, 1.008337095516569,
1.0079539881889774, 1.0074857334630352, 1.006747783037474, 1.005962048923679, 1.0049115434782612,
1.003812267822736, 1.0026427549407106, 1.001251963531669, 0.999898555335968, 0.9984976286266923,
0.996995982142858, 0.9955652088974847, 0.9941647321428578, 0.9927727076023389, 0.9914750532544377,
0.990212467710371, 0.9891098035363466, 0.9875998927875242, 0.9828093773946361, 0.9722532524271845,
0.9574084365384614, 0.9411012303149601, 0.9251820309477757, 0.9121488392156851, 0.9033119748549322,
0.9002445803921568, 0.9032760564202343, 0.91192435882353, 0.9249696964980555, 0.94071381372549,
0.957139088974855, 0.9721083392156871, 0.982955287937743, 0.9880613320235758, 0.9897455322896282,
0.9909590626223097, 0.9922601592233015, 0.9936513112840472, 0.9951442427184468, 0.9967071285988475,
0.9982921493123781, 0.9998775465116277, 1.001389230174081, 1.0029109110251453, 1.0044033691406251,
1.0057110841487276, 1.0069551867704276, 1.008118776264591, 1.0089884470588228, 1.0098663972602735,
1.0104514566473979, 1.0109849223300964, 1.0112043902912626, 1.0114717968750002, 1.0113343036750482,
1.0112205972495087, 1.0108811786407768, 1.010500276264591, 1.0099054552529192, 1.009353759223301,
1.008592596116505, 1.007887223091976, 1.0070715634615386, 1.0063525891472884, 1.0055587861271678,
1.0048733732809436, 1.0041832862669238, 1.0035913326848247, 1.0025318871595328, 1.000088536345776,
0.9963596140350871, 0.9918380684931506, 0.9873937281553398, 0.9833394624277463, 0.9803621496062999,
0.9786476100386117]
# Create a figure to view the data.
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
# Plot the data.
ax.scatter(test_array, test_data, color="k", s=1)
This outputs the following: [scatter plot of the test data]
The question is how to determine the Fourier series best describing this data. The usual formula for determining the Fourier coefficients requires inserting a function into an integral, but if I had a function to describe the data I wouldn't need the Fourier coefficients at all; the whole point of finding this series is to have a functional representation of the data. In the absence of such a function, then, how are the coefficients found?
My solution to this problem is to apply a discrete Fourier transform to the data using NumPy's implementation of the Fast Fourier Transform, numpy.fft.fft(); this is why it's critical that the data is evenly spaced in time, as FFT requires this. While the FFT is typically used to perform analysis of the frequency spectrum, the desired Fourier coefficients are directly related to the output of this function.
Specifically, for an input of $N$ samples this function outputs a series of complex-valued coefficients $c_n$. The Fourier series coefficients are found using the relations $a_n = 2\,\mathrm{Re}(c_n)/N$ and $b_n = -2\,\mathrm{Im}(c_n)/N$, so the series (with period 1, since we work in phase space) is $f(x) = a_0/2 + \sum_{n\ge 1}\left[a_n \cos(2\pi n x) + b_n \sin(2\pi n x)\right]$.
Therefore the FFT allows the Fourier coefficients to be directly computed. Here is the MWE of my solution to this problem, expanding the example given above:
import numpy as np
import matplotlib.pyplot as plt
# Set the number of equal-time bins to create.
n_bins = 101
# Set the number of Fourier coefficients to use.
n_coeff = 51
# Define a function to generate a Fourier series based on the coefficients determined by the Fast Fourier Transform.
# This also includes a series of phases x to pass through the function.
def create_fourier_series(x, coefficients):
    # Begin the series with the zeroth-order Fourier coefficient.
    fourier_series = coefficients[0][0] / 2
    # Now generate the first through n_coeff'th terms. The period is defined
    # to be 1 since we're operating in phase space.
    for n in range(1, len(coefficients)):
        fourier_series += (coefficients[n][0] * np.cos(2 * np.pi * n * x) +
                           coefficients[n][1] * np.sin(2 * np.pi * n * x))
    return fourier_series
# Create an independent test variable to define the x-axis of the test data.
test_array = np.linspace(0, 1, n_bins) - 0.5
# Define some test data to try to apply a Fourier series to.
test_data = [0.9783883464566918, 0.979599093567252, 0.9821424606299206, 0.9857575507812502, 0.9899278899999995,
0.9941848228346452, 0.9978438300395263, 1.0003009205426352, 1.0012208923679058, 1.0017130521235522,
1.0021799664031628, 1.0027475606936413, 1.0034168260869563, 1.0040914266144825, 1.0047781181102355,
1.005520348837209, 1.0061899214145387, 1.006846206627681, 1.0074483048543692, 1.0078691461988312,
1.008318736328125, 1.008446947572815, 1.00862051262136, 1.0085134881422921, 1.008337095516569,
1.0079539881889774, 1.0074857334630352, 1.006747783037474, 1.005962048923679, 1.0049115434782612,
1.003812267822736, 1.0026427549407106, 1.001251963531669, 0.999898555335968, 0.9984976286266923,
0.996995982142858, 0.9955652088974847, 0.9941647321428578, 0.9927727076023389, 0.9914750532544377,
0.990212467710371, 0.9891098035363466, 0.9875998927875242, 0.9828093773946361, 0.9722532524271845,
0.9574084365384614, 0.9411012303149601, 0.9251820309477757, 0.9121488392156851, 0.9033119748549322,
0.9002445803921568, 0.9032760564202343, 0.91192435882353, 0.9249696964980555, 0.94071381372549,
0.957139088974855, 0.9721083392156871, 0.982955287937743, 0.9880613320235758, 0.9897455322896282,
0.9909590626223097, 0.9922601592233015, 0.9936513112840472, 0.9951442427184468, 0.9967071285988475,
0.9982921493123781, 0.9998775465116277, 1.001389230174081, 1.0029109110251453, 1.0044033691406251,
1.0057110841487276, 1.0069551867704276, 1.008118776264591, 1.0089884470588228, 1.0098663972602735,
1.0104514566473979, 1.0109849223300964, 1.0112043902912626, 1.0114717968750002, 1.0113343036750482,
1.0112205972495087, 1.0108811786407768, 1.010500276264591, 1.0099054552529192, 1.009353759223301,
1.008592596116505, 1.007887223091976, 1.0070715634615386, 1.0063525891472884, 1.0055587861271678,
1.0048733732809436, 1.0041832862669238, 1.0035913326848247, 1.0025318871595328, 1.000088536345776,
0.9963596140350871, 0.9918380684931506, 0.9873937281553398, 0.9833394624277463, 0.9803621496062999,
0.9786476100386117]
# Determine the fast Fourier transform for this test data.
fast_fourier_transform = np.fft.fft(test_data[n_bins // 2:] + test_data[:n_bins // 2])
# Create an empty list to hold the values of the Fourier coefficients.
fourier_coeff = []
# Loop through the FFT and pick out the a and b coefficients, which are the real and imaginary parts of the
# coefficients calculated by the FFT.
for n in range(0, n_coeff):
    a = 2 * fast_fourier_transform[n].real / n_bins
    b = -2 * fast_fourier_transform[n].imag / n_bins
    fourier_coeff.append([a, b])
# Create the Fourier series approximating this data.
fourier_series = create_fourier_series(test_array, fourier_coeff)
# Create a figure to view the data.
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
# Plot the data.
ax.scatter(test_array, test_data, color="k", s=1)
# Plot the Fourier series approximation.
ax.plot(test_array, fourier_series, color="b", lw=0.5)
This outputs the following: [the scatter plot with the Fourier series approximation overlaid in blue]
Note that how I defined the FFT input (passing the second half of the data followed by the first half) is a consequence of how this data was generated. Specifically, the data runs from -0.5 to 0.5, but the FFT assumes it runs from 0.0 to 1.0, necessitating this shift.
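The same reordering can be written more compactly with np.roll (a shorthand I'd suggest, equivalent to the slicing above):
fast_fourier_transform = np.fft.fft(np.roll(test_data, -(n_bins // 2)))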
I've found that this works quite well for data that doesn't include very sharp and narrow discontinuities. I would be interested to hear if anyone has another suggested solution to this problem, and I hope people find this explanation clear and helpful.
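One compact variation on the same computation, offered here as a sketch rather than part of the original solution: since the series only uses non-negative frequencies, np.fft.rfft returns exactly the coefficients needed.
# rfft returns the n_bins // 2 + 1 non-negative frequency terms, which for
# n_bins = 101 is exactly the 51 coefficients used above.
rfft = np.fft.rfft(test_data[n_bins // 2:] + test_data[:n_bins // 2])
fourier_coeff = [[2 * c.real / n_bins, -2 * c.imag / n_bins] for c in rfft]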
Not sure if it helps you in any way; I wrote a programme to interpolate your data. This is done using buildingblocks==0.0.15.
Please see below:
import matplotlib.pyplot as plt
from buildingblocks import bb
import numpy as np
Ydata = [0.9783883464566918, 0.979599093567252, 0.9821424606299206, 0.9857575507812502, 0.9899278899999995,
0.9941848228346452, 0.9978438300395263, 1.0003009205426352, 1.0012208923679058, 1.0017130521235522,
1.0021799664031628, 1.0027475606936413, 1.0034168260869563, 1.0040914266144825, 1.0047781181102355,
1.005520348837209, 1.0061899214145387, 1.006846206627681, 1.0074483048543692, 1.0078691461988312,
1.008318736328125, 1.008446947572815, 1.00862051262136, 1.0085134881422921, 1.008337095516569,
1.0079539881889774, 1.0074857334630352, 1.006747783037474, 1.005962048923679, 1.0049115434782612,
1.003812267822736, 1.0026427549407106, 1.001251963531669, 0.999898555335968, 0.9984976286266923,
0.996995982142858, 0.9955652088974847, 0.9941647321428578, 0.9927727076023389, 0.9914750532544377,
0.990212467710371, 0.9891098035363466, 0.9875998927875242, 0.9828093773946361, 0.9722532524271845,
0.9574084365384614, 0.9411012303149601, 0.9251820309477757, 0.9121488392156851, 0.9033119748549322,
0.9002445803921568, 0.9032760564202343, 0.91192435882353, 0.9249696964980555, 0.94071381372549,
0.957139088974855, 0.9721083392156871, 0.982955287937743, 0.9880613320235758, 0.9897455322896282,
0.9909590626223097, 0.9922601592233015, 0.9936513112840472, 0.9951442427184468, 0.9967071285988475,
0.9982921493123781, 0.9998775465116277, 1.001389230174081, 1.0029109110251453, 1.0044033691406251,
1.0057110841487276, 1.0069551867704276, 1.008118776264591, 1.0089884470588228, 1.0098663972602735,
1.0104514566473979, 1.0109849223300964, 1.0112043902912626, 1.0114717968750002, 1.0113343036750482,
1.0112205972495087, 1.0108811786407768, 1.010500276264591, 1.0099054552529192, 1.009353759223301,
1.008592596116505, 1.007887223091976, 1.0070715634615386, 1.0063525891472884, 1.0055587861271678,
1.0048733732809436, 1.0041832862669238, 1.0035913326848247, 1.0025318871595328, 1.000088536345776,
0.9963596140350871, 0.9918380684931506, 0.9873937281553398, 0.9833394624277463, 0.9803621496062999,
0.9786476100386117]
Xdata=list(range(0,len(Ydata)))
Xnew=list(np.linspace(0,len(Ydata),200))
Ynew=bb.interpolate(Xdata,Ydata,Xnew,40)
plt.figure()
plt.plot(Xdata,Ydata)
plt.plot(Xnew,Ynew,'*')
plt.legend(['Given Data', 'Interpolated Data'])
plt.show()
Should you want to write further code, I have also given a snippet below so that you can see the source code and learn:
import module  # replace 'module' with the module you want to read, e.g. buildingblocks
import inspect
src = inspect.getsource(module)
print(src)
When using scipy.ndimage.interpolation.shift to shift a numpy data array along one axis with periodic boundary treatment (mode = 'wrap'), I get an unexpected behavior. The routine tries to force the first pixel (index 0) to be identical to the last one (index N-1) instead of the "last plus one (index N)".
Minimal example:
# module import
import numpy as np
from scipy.ndimage.interpolation import shift
import matplotlib.pyplot as plt
# print scipy.__version__
# 0.18.1
a = range(10)
plt.figure(figsize=(16,12))
for i, shift_pix in enumerate(range(10)):
    # shift the data via spline interpolation
    b = shift(a, shift=shift_pix, mode='wrap')
    # plotting the data
    plt.subplot(5, 2, i + 1)
    plt.plot(a, marker='o', label='data')
    plt.plot(np.roll(a, shift_pix), marker='o', label='data, roll')
    plt.plot(b, marker='o', label='shifted data')
    if i == 0:
        plt.legend(loc=4, fontsize=12)
    plt.ylim(-1, 10)
    ax = plt.gca()
    ax.text(0.10, 0.80, 'shift %d pix' % i, transform=ax.transAxes)
Blue line: data before the shift
Green line: expected shift behavior
Red line: actual shift output of scipy.ndimage.interpolation.shift
Is there some error in how I call the function, or in how I understand its behavior with mode='wrap'? The current results contradict the mode parameter description from the related scipy tutorial page and from another StackOverflow post. Is there an off-by-one error in the code?
The scipy version used is 0.18.1, distributed in anaconda-2.2.0.
It seems that the behaviour you have observed is intentional.
The cause of the problem lies in the C function map_coordinate which translates the coordinates after shift to ones before shift:
map_coordinate(double in, npy_intp len, int mode)
The function is used as a subroutine in NI_ZoomShift, which does the actual shift. Its interesting part is the branch handling mode='wrap'; the original C snippet is sketched below.
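Here is a Python paraphrase of that branch, reconstructed from the description and the worked example that follows (not the verbatim C source):
def map_coordinate_wrap(coord, length):
    # Map an out-of-range input coordinate back into range for mode='wrap'.
    # Note sz = length - 1, not length: this is the off-by-one in question.
    sz = length - 1
    if coord < 0:
        # C integer division truncates toward zero.
        coord += sz * (int(-coord / sz) + 1)
    return coord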
Example: let's see how the output for output = shift(np.arange(10), shift=4, mode='wrap') (from the question) is computed.
NI_ZoomShift computes the edge values output[0] and output[9] in some special way, so let's take a look at the computation of output[1] (a bit simplified):
# input = [0,1,2,3,4,5,6,7,8,9]
# output = [ ,?, , , , , , , , ] '?' == computed position
# shift = 4
output_index = 1
in = output_index - shift   # -3
sz = 10 - 1                 # 9
in += sz * ((-in / sz) + 1)
#  += 9 * ((3 / 9) + 1) == 9   (C integer division: 3 / 9 == 0)
# in == 6
return input[in]  # 6
It is clear that sz = len - 1 is responsible for the behaviour you have observed. It was changed from sz = len in a suggestively named commit dating back to 2007: Fix off-by-on errors in ndimage boundary routines. Update tests.
I don't know why such change was introduced. One of the possible explanations that come to my mind is as follows:
Function 'shift' uses splines for interpolation.
The knot vector of a uniform spline on the interval [0, k] is simply [0,1,2,...,k]. When we say that the spline should wrap, it is natural to require equal values at knots 0 and k, so that many copies of the spline can be glued together, forming a periodic function:
0--1--2--3-...-k 0--1--2--3-...-k 0--1-- ...
0--1--2--3-...-k 0--1--2--3-...-k ...
Maybe shift just treats its input as a list of values for spline's knots?
It is worth noting that this behavior appears to be a bug, as noted in this SciPy issue:
https://github.com/scipy/scipy/issues/2640
The issue appears to affect every extrapolation mode in scipy.ndimage other than mode='mirror'.
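For integer shifts, a practical workaround (my suggestion, not from the issue thread) is to skip the spline machinery entirely and use np.roll, which gives the expected periodic behaviour:
import numpy as np
a = np.arange(10)
b = np.roll(a, 4)  # array([6, 7, 8, 9, 0, 1, 2, 3, 4, 5]) -- the expected wrap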
Using scipy's splrep I can easily fit a test sinewave:
import numpy as np
from scipy.interpolate import splrep, splev
import matplotlib.pyplot as plt
plt.style.use("ggplot")
# Generate test sinewave
x = np.arange(0, 20, .1)
y = np.sin(x)
# Interpolate
tck = splrep(x, y)
x_spl = x + 0.05  # Just to show it works
y_spl = splev(x_spl, tck)
plt.plot(x_spl, y_spl)
The splrep documentation states that the default value for the weight parameter is np.ones(len(x)). However, plotting this results in a totally different plot:
tck = splrep(x, y, w=np.ones(len(x_spl)))
y_spl = splev(x_spl, tck)
plt.plot(x_spl, y_spl)
The documentation also states that the smoothing condition s is different when a weight array is given - but even when setting s=len(x_spl) - np.sqrt(2*len(x_spl)) (the default value without a weight array) the result does not strictly correspond to the original curve as shown in the plot.
What do I need to change in the code listed above in order to make the interpolation with weight array (as listed above) output the same result as the interpolation without the weights?
I have tested this with scipy 0.17.0. Gist with a test IPython notebook
You only have to change one line of your code to get the identical output:
tck = splrep(x, y, w=np.ones(len(x_spl)))
should become
tck = splrep(x, y, w=np.ones(len(x_spl)), s=0)
So, the only difference is that you have to specify s instead of using the default one.
When you look at the source code of splrep you will see why that is necessary:
if w is None:
    w = ones(m, float)
    if s is None:
        s = 0.0
else:
    w = atleast_1d(w)
    if s is None:
        s = m - sqrt(2*m)
which means that, if neither weights nor s are provided, s is set to 0 and if you provide weights but no s then s = m - sqrt(2*m) where m = len(x).
So, in your example above you compare outputs with the same weights but with different s (which are 0 and m - sqrt(2*m), respectively).
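A quick check of this explanation (a sketch; tck is the (knots, coefficients, degree) tuple returned by splrep):
import numpy as np
from scipy.interpolate import splrep
x = np.arange(0, 20, .1)
y = np.sin(x)
tck_default = splrep(x, y)                           # w=None, so s defaults to 0
tck_weighted = splrep(x, y, w=np.ones(len(x)), s=0)  # explicit weights, same s
print(np.allclose(tck_default[1], tck_weighted[1]))  # True: same coefficients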
I am new to sympy but I already get a nice output when I plot the implicit function (actually the formula for Cassini's ovals) using sympy:
from sympy import plot_implicit, symbols, Eq, solve
x, y = symbols('x y')
k=2.7
a=3
eq = Eq((x**2 + y**2)**2-2*a**2*(x**2-y**2), k**4-a**4)
plot_implicit(eq)
Now is it actually possible to somehow get the x and y values corresponding to the plot? or alternatively solve the implicit equation without plotting at all?
thanks! :-)
This is an answer addressing your
is it actually possible to somehow get the x and y values corresponding to the plot?
and I say "addressing" because it's not possible to get the x and y values used to draw the curves, since the curves are not drawn from a sequence of 2D points; more on this later.
TL;DR
pli = plot_implicit(...)
series = pli[0]
data, action = series.get_points()
# the next line assumes numpy has been imported as np
data = np.array([(x_int.mid, y_int.mid) for x_int, y_int in data])
Let's start with your code
from sympy import plot_implicit, symbols, Eq, solve
x, y = symbols('x y')
k=2.7
a=3
eq = Eq((x**2 + y**2)**2-2*a**2*(x**2-y**2), k**4-a**4)
and plot it, with a twist: we save the Plot object and print it
pli = plot_implicit(eq)
print(pli)
to get
Plot object containing:
[0]: Implicit equation: Eq(-18*x**2 + 18*y**2 + (x**2 + y**2)**2, -27.8559000000000) for x over (-5.0, 5.0) and y over (-5.0, 5.0)
We are interested in this object indexed by 0,
ob = pli[0]
print(dir(ob))
that gives (ellipsis are mine)
['__class__', …, 'get_points', …, 'var_y']
The name get_points sounds full of promise, doesn't it?
print(ob.get_points())
that gives (edited for clarity and with a big cut)
([
[interval(-3.759774, -3.750008), interval(-0.791016, -0.781250)],
[interval(-3.876961, -3.867195), interval(-0.634768, -0.625003)],
[interval(-3.837898, -3.828133), interval(-0.693361, -0.683596)],
[interval(-3.847664, -3.837898), interval(-0.673830, -0.664065)],
...
[interval(3.837895, 3.847661), interval(0.664064, 0.673830)],
[interval(3.828130, 3.837895), interval(0.683596, 0.693362)],
[interval(3.867192, 3.876958), interval(0.625001, 0.634766)],
[interval(3.750005, 3.759770), interval(0.781255, 0.791021)]
], 'fill')
What is this? The documentation of plot_implicit has
plot_implicit, by default, uses interval arithmetic to plot functions.
Following the source code of plot_implicit.py and plot.py, one realizes that, in this case, the actual plotting (speaking of the matplotlib backend) is just one line of code
self.ax.fill(x, y, facecolor=s.line_color, edgecolor='None')
where x and y are constructed from the list of intervals, as returned from .get_points(), as follows
x, y = [], []
for intervals in interval_list:
    intervalx = intervals[0]
    intervaly = intervals[1]
    x.extend([intervalx.start, intervalx.start,
              intervalx.end, intervalx.end, None])
    y.extend([intervaly.start, intervaly.end,
              intervaly.end, intervaly.start, None])
so that for each couple of intervals matplotlib is directed to draw a filled rectangle, small enough that the eye sees a continuous line (note the use of None to have disjoint rectangles).
We can conclude that the list of couples of intervals
l_xy_intervals = ((pli[0]).get_points())[0]
represents rectangular areas where the implicit expression you are plotting is "true enough".
You can do this, even with interval math, by taking the midpoint of each interval. Starting from your code, slightly changed by saving the plot_implicit object in a variable called g, we have:
from sympy import plot_implicit, symbols, Eq, solve
x, y = symbols('x y')
k=2.7
a=3
eq = Eq((x**2 + y**2)**2-2*a**2*(x**2-y**2), k**4-a**4)
g = plot_implicit(eq)
Now let's save in a variable named ptos the intervals that were used to draw the plot.
ptos = g[0].get_points()[0]
This way ptos[0][0] will be the first interval on the x axis and ptos[0][1] will be its pair on the y axis. The intervals have a property called mid which gives the middle point of the interval. So you can take ptos[0][0].mid, ptos[0][1].mid as a pair x, y "true enough" to be one of our numerical solutions.
This way, a data frame composed of these middle-point pairs can be generated with:
import numpy as np
import pandas as pd
intervs = np.array(ptos, dtype='object')
meio = lambda x0: x0.mid
px = list(map(meio, intervs[:, 0]))
py = list(map(meio, intervs[:, 1]))
dados = pd.DataFrame({'x': px, 'y': py})
dados.head()
Which in this example would give us:
x y
0 -1.177733 0.598826
1 -1.175389 0.596483
2 -1.175389 0.598826
3 -1.173045 0.596483
4 -1.173045 0.598826
This idea of taking the intervals' middle points can be used whenever one needs to move from "interval math" to standard point-level math. Hope this helps. Regards.