Hello,
I'm using KernelPCA to reduce dimensionality, and I need the eigenvalues and eigenvectors. In PCA, I know `pca.explained_variance_` holds the eigenvalues and `pca.components_` holds the eigenvectors. I read the scikit-learn documentation and found the following for KernelPCA:
lambdas_ : array, (n_components,)
Eigenvalues of the centered kernel matrix in decreasing order.
alphas_ : array, (n_samples, n_components)
Eigenvectors of the centered kernel matrix.
Comparing this with PCA's documentation, I'm confused about why the eigenvector shapes differ: in PCA the shape is (n_components, n_features), while in KernelPCA it is (n_samples, n_components). Here is PCA's documentation:
explained_variance_ : array, shape (n_components,)
The amount of variance explained by each of the selected components. Equal to the n_components largest eigenvalues of the covariance matrix of X.
components_ : array, shape (n_components, n_features)
Principal axes in feature space, representing the directions of maximum variance in the data. The components are sorted by explained_variance_.
I know that if KernelPCA's kernel is linear, it is exactly PCA. So I want to know how to get the eigenvalues and eigenvectors in KernelPCA. Can you help me?
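For context, here is a minimal NumPy sketch of why the shapes differ in the linear-kernel case (this is an illustration of the underlying linear algebra, not scikit-learn's internal implementation): PCA eigenvectors come from the n_features × n_features scatter matrix, while kernel PCA eigenvectors come from the n_samples × n_samples Gram matrix, so they naturally live in sample space. A feature-space axis can be recovered from a kernel eigenvector via `v = Xc.T @ alpha / sqrt(lambda)`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))     # 50 samples, 3 features
Xc = X - X.mean(axis=0)          # center the data

# PCA: eigendecomposition of the (3 x 3) scatter matrix Xc.T @ Xc
# -> eigenvectors have length n_features
evals_f, evecs_f = np.linalg.eigh(Xc.T @ Xc)
order = np.argsort(evals_f)[::-1]
evals_f, evecs_f = evals_f[order], evecs_f[:, order]

# Linear-kernel PCA: eigendecomposition of the (50 x 50) centered
# Gram matrix Xc @ Xc.T -> eigenvectors have length n_samples
evals_s, evecs_s = np.linalg.eigh(Xc @ Xc.T)
order = np.argsort(evals_s)[::-1]
evals_s, evecs_s = evals_s[order], evecs_s[:, order]

# The nonzero eigenvalues of the two matrices coincide...
assert np.allclose(evals_f, evals_s[:3])

# ...and each feature-space axis is recovered (up to sign) from the
# sample-space eigenvector alpha as v = Xc.T @ alpha / sqrt(lambda)
for k in range(3):
    v = Xc.T @ evecs_s[:, k] / np.sqrt(evals_s[k])
    assert np.allclose(np.abs(v @ evecs_f[:, k]), 1.0)
```

Note that this back-projection to feature space only works for a linear kernel; for nonlinear kernels the eigenvectors live in an implicit feature space and only their sample-space representation (the `alphas_` shown in the docs quoted above) is available.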