import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

# PCA projection to 2 components
pca = PCA(n_components=2)
pca_representation = pca.fit_transform(dataset_norm[index])

# Isomap projection to 2 components
iso = Isomap(n_components=2, n_neighbors=40)
iso_representation = iso.fit_transform(dataset_norm[index])
I use Google Colab.
When I run the line

iso_representation = iso.fit_transform(dataset_norm[index])

the session crashes with the message: "Your session crashed after using all available RAM."
The PCA step runs correctly on the same data. I have looked through many answers but can't solve this problem, and other suggested code doesn't work either. Am I overlooking something?
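One common workaround, sketched below with hypothetical data (X stands in for dataset_norm[index]): unlike PCA, Isomap materializes a dense n_samples x n_samples geodesic distance matrix, so its memory use grows quadratically with the number of rows, and fitting on a random subsample can keep it inside Colab's RAM.

import numpy as np
from sklearn.manifold import Isomap

# hypothetical stand-in for dataset_norm[index]
X = np.random.rand(20000, 50)

# Isomap stores a dense n_samples x n_samples distance matrix, so
# subsampling the rows shrinks memory use quadratically
rng = np.random.default_rng(0)
subset = rng.choice(X.shape[0], size=5000, replace=False)

iso = Isomap(n_components=2, n_neighbors=40)
iso_representation = iso.fit_transform(X[subset])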
I'm trying to run the code below, but I keep getting the same error: "ModuleNotFoundError: No module named 'basic_units'".
import sympy as sym
import math
import numpy as np
import os
import matplotlib.pyplot as plt
from basic_units import radians
fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.plot([np.pi, np.pi], [0, 10], xunits=radians)
plt.show()
I was trying to make a polar plot with the angles in radians, and this seemed to be the only solution I could find. I've seen basic_units used in other code, but I just can't get it to work: when I run this test, I get the error above.
basic_units is not a Python package; have a look at the official Matplotlib documentation. Its "Radian ticks" example says:

Plot with radians from the basic_units mockup example package.
This example shows how the unit class can determine the tick locating,
formatting and axis labeling.
This example requires basic_units.py

import matplotlib.pyplot as plt
import numpy as np
from basic_units import radians, degrees, cos

You can download the basic_units.py source file from that page and put it in your project.
Did you download the basic_units.py file as the Matplotlib documentation mentions (https://matplotlib.org/stable/gallery/units/radian_demo.html)? If yes, you have to provide a valid path in the import statement. Right now you're importing it as if it were an installed module, but it is a plain file, so the import has to point at its actual location, like:

from .basic_units import radians

or

from my_code.src.basic_units import radians
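As an aside, if the end goal is just radian tick labels on a polar plot, here is a minimal sketch that avoids basic_units entirely (the tick positions and label strings are my own choices, not from the original question):

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.plot([np.pi, np.pi], [0, 10])

# label the angular axis in radians by hand instead of via basic_units
ax.set_xticks([0, np.pi / 2, np.pi, 3 * np.pi / 2])
ax.set_xticklabels(['0', r'$\pi/2$', r'$\pi$', r'$3\pi/2$'])
plt.show()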
I'm trying to use the following:
from fireTS.models import NARX, DirectAutoRegressor
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
import numpy as np
import scipy
import sklearn
import pandas as pd
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
However, upon running the first line, I get an error saying:

ModuleNotFoundError: No module named 'sklearn.metrics.regression'

Interestingly, I cannot find anything on the web about this problem (not even in the Stack Overflow question about it asked 26+ days ago).
Has anyone encountered the same and been able to fix this?
EDIT:
So I found the fix.
I went to the directory where my fireTS package is installed and opened models.py.
I changed the following:

from sklearn.metrics.regression import r2_score, mean_squared_error

to

from sklearn.metrics import r2_score, mean_squared_error

and voilà, NO MORE ERRORS :)
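If you'd rather not hand-edit an installed package (the fix disappears on every reinstall or upgrade of fireTS), a sketch of an alternative workaround is to alias the removed module path before importing fireTS; this assumes r2_score and mean_squared_error still live in sklearn.metrics, which they do in current scikit-learn:

import sys
import sklearn.metrics

# point the removed module path 'sklearn.metrics.regression' at
# sklearn.metrics so fireTS's old import statement still resolves
sys.modules['sklearn.metrics.regression'] = sklearn.metrics

from fireTS.models import NARX, DirectAutoRegressor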
I run Visual Studio Code on Windows 10. The current version of my Visual Studio Code is 1.45.1, and I am running Python 3.7.4 (64-bit) on it. Visual Studio Code takes 5 to 10 minutes to load the following libraries:
import pandas as pd
import numpy as np
from datetime import datetime, date
from scipy.stats import ttest_ind
from scipy.stats import ttest_ind_from_stats
from scipy.stats import ttest_1samp
import seaborn as sns
import matplotlib.pyplot as plt
I tried to find a solution online to speed this up, but couldn't find one. Is anyone experiencing the same issue? How can we speed up Visual Studio Code?
from time import perf_counter

# take a timestamp before and after each import; the difference
# is how long that import took
a=perf_counter()
import pandas as pd
b=perf_counter()
import numpy as np
c=perf_counter()
from datetime import datetime, date
d=perf_counter()
from scipy.stats import ttest_ind
e=perf_counter()
from scipy.stats import ttest_ind_from_stats
f=perf_counter()
from scipy.stats import ttest_1samp
g=perf_counter()
import seaborn as sns
h=perf_counter()
import matplotlib.pyplot as plt
i=perf_counter()
print(b-a)
print(c-b)
print(d-c)
print(e-d)
print(f-e)
print(g-f)
print(h-g)
print(i-h)
Use this to analyze which imports take too much time.
Then use cProfile for a more exact analysis; the Python standard-library documentation for cProfile explains how.
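For example, a minimal cProfile sketch (seaborn here is just a stand-in for whichever import the timings flagged as slow):

import cProfile
import pstats

# profile one slow import (run this in a fresh interpreter, otherwise
# the module is already cached in sys.modules and the numbers are tiny)
cProfile.run('import seaborn', 'import_profile')
pstats.Stats('import_profile').sort_stats('cumulative').print_stats(10)

On Python 3.7+ you can also run the script as python -X importtime script.py, which prints a per-module import-time breakdown.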
I'm writing code in a Jupyter Notebook that involves cleaning and analyzing a large amount of consumer data. I want to save the dataframes (thousands of rows) so I don't have to rerun the code every time I make an adjustment, and dill seems like the perfect package to do so... Except I'm getting this error when attempting to pickle the session:
AttributeError: module 'dill' has no attribute 'dump_session'
Let me know if the program code is necessary - I don't think it should make a difference. The imports are:
from __future__ import division  # a __future__ import must come before all other statements

import numpy as np
import pandas as pd
import dill
import scipy
from matplotlib import pyplot as plt
from collections import OrderedDict
from sklearn.cluster import KMeans

pd.options.display.max_columns = None
and when I run this code I get the error from above:
dill.dump_session('recengine.db')
Is there another package that's interfering with dill's use of pickle vs. cPickle?
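One diagnostic worth running first (a sketch, not a confirmed fix): check that the dill you imported is the installed package and not a local file named dill.py shadowing it, since shadowing is a common cause of this kind of missing-attribute error:

import dill

# if this prints a path inside your project instead of site-packages,
# a local dill.py is shadowing the installed library
print(dill.__file__)

# list the session-related attributes the imported module actually has
print([name for name in dir(dill) if 'session' in name])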
The following block of code
%matplotlib inline
import sys
import pandas as pd
# raw string so the backslashes are not read as escape sequences
sys.path.append(r"C:\Users\%USER%\PycharmProjects\cap_rate")
import mongo
import c_lib
import t_lib
import matplotlib.pyplot as plt
plt.style.use('ggplot')  # 'matplotlib' itself was never imported, so use plt.style
is taking upwards of ten minutes to run.
Does anyone have an idea as to what the root cause may be?
There are no major warnings or output on my end, just severe slowness.
I rebooted and it is much faster.
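For future reference, you can localize a slow import from inside the notebook instead of rebooting blindly; a sketch using IPython's %time magic (mongo, c_lib, and t_lib are the project's own modules from the question):

%time import pandas as pd
%time import mongo
%time import c_lib
%time import t_lib
%time import matplotlib.pyplot as plt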