Minimal API Data for `sklearn/sklearn.manifold._t_sne/TSNE/__init__/n_iter`
{"schemaVersion": 1,"distribution": "scikit-learn","package": "sklearn","version": "1.1.1","modules": [{"id": "sklearn/sklearn.manifold","name": "sklearn.manifold","imports": [],"from_imports": [{"module": "sklearn.manifold._isomap","declaration": "Isomap","alias": null},{"module": "sklearn.manifold._locally_linear","declaration": "locally_linear_embedding","alias": null},{"module": "sklearn.manifold._locally_linear","declaration": "LocallyLinearEmbedding","alias": null},{"module": "sklearn.manifold._mds","declaration": "MDS","alias": null},{"module": "sklearn.manifold._mds","declaration": "smacof","alias": null},{"module": "sklearn.manifold._spectral_embedding","declaration": "spectral_embedding","alias": null},{"module": "sklearn.manifold._spectral_embedding","declaration": "SpectralEmbedding","alias": null},{"module": "sklearn.manifold._t_sne","declaration": "trustworthiness","alias": null},{"module": "sklearn.manifold._t_sne","declaration": "TSNE","alias": null}],"classes": ["sklearn/sklearn.manifold._t_sne/TSNE"],"functions": []}],"classes": [{"id": "sklearn/sklearn.manifold._t_sne/TSNE","name": "TSNE","qname": "sklearn.manifold._t_sne.TSNE","decorators": [],"superclasses": ["BaseEstimator"],"methods": ["sklearn/sklearn.manifold._t_sne/TSNE/__init__"],"is_public": true,"reexported_by": ["sklearn/sklearn.manifold"],"description": "T-distributed Stochastic Neighbor Embedding.\n\nt-SNE [1] is a tool to visualize high-dimensional data. It converts\nsimilarities between data points to joint probabilities and tries\nto minimize the Kullback-Leibler divergence between the joint\nprobabilities of the low-dimensional embedding and the\nhigh-dimensional data. t-SNE has a cost function that is not convex,\ni.e. with different initializations we can get different results.\n\nIt is highly recommended to use another dimensionality reduction\nmethod (e.g. PCA for dense data or TruncatedSVD for sparse data)\nto reduce the number of dimensions to a reasonable amount (e.g. 50)\nif the number of features is very high. This will suppress some\nnoise and speed up the computation of pairwise distances between\nsamples. For more tips see Laurens van der Maaten's FAQ [2].\n\nRead more in the :ref:`User Guide <t_sne>`.","docstring": "T-distributed Stochastic Neighbor Embedding.\n\nt-SNE [1] is a tool to visualize high-dimensional data. It converts\nsimilarities between data points to joint probabilities and tries\nto minimize the Kullback-Leibler divergence between the joint\nprobabilities of the low-dimensional embedding and the\nhigh-dimensional data. t-SNE has a cost function that is not convex,\ni.e. with different initializations we can get different results.\n\nIt is highly recommended to use another dimensionality reduction\nmethod (e.g. PCA for dense data or TruncatedSVD for sparse data)\nto reduce the number of dimensions to a reasonable amount (e.g. 50)\nif the number of features is very high. This will suppress some\nnoise and speed up the computation of pairwise distances between\nsamples. For more tips see Laurens van der Maaten's FAQ [2].\n\nRead more in the :ref:`User Guide <t_sne>`.\n\nParameters\n----------\nn_components : int, default=2\n Dimension of the embedded space.\n\nperplexity : float, default=30.0\n The perplexity is related to the number of nearest neighbors that\n is used in other manifold learning algorithms. Larger datasets\n usually require a larger perplexity. Consider selecting a value\n between 5 and 50. 
Different values can result in significantly\n different results.\n\nearly_exaggeration : float, default=12.0\n Controls how tight natural clusters in the original space are in\n the embedded space and how much space will be between them. For\n larger values, the space between natural clusters will be larger\n in the embedded space. Again, the choice of this parameter is not\n very critical. If the cost function increases during initial\n optimization, the early exaggeration factor or the learning rate\n might be too high.\n\nlearning_rate : float or 'auto', default=200.0\n The learning rate for t-SNE is usually in the range [10.0, 1000.0]. If\n the learning rate is too high, the data may look like a 'ball' with any\n point approximately equidistant from its nearest neighbours. If the\n learning rate is too low, most points may look compressed in a dense\n cloud with few outliers. If the cost function gets stuck in a bad local\n minimum increasing the learning rate may help.\n Note that many other t-SNE implementations (bhtsne, FIt-SNE, openTSNE,\n etc.) use a definition of learning_rate that is 4 times smaller than\n ours. So our learning_rate=200 corresponds to learning_rate=800 in\n those other implementations. The 'auto' option sets the learning_rate\n to `max(N / early_exaggeration / 4, 50)` where N is the sample size,\n following [4] and [5]. This will become default in 1.2.\n\nn_iter : int, default=1000\n Maximum number of iterations for the optimization. Should be at\n least 250.\n\nn_iter_without_progress : int, default=300\n Maximum number of iterations without progress before we abort the\n optimization, used after 250 initial iterations with early\n exaggeration. Note that progress is only checked every 50 iterations so\n this value is rounded to the next multiple of 50.\n\n .. versionadded:: 0.17\n parameter *n_iter_without_progress* to control stopping criteria.\n\nmin_grad_norm : float, default=1e-7\n If the gradient norm is below this threshold, the optimization will\n be stopped.\n\nmetric : str or callable, default='euclidean'\n The metric to use when calculating distance between instances in a\n feature array. If metric is a string, it must be one of the options\n allowed by scipy.spatial.distance.pdist for its metric parameter, or\n a metric listed in pairwise.PAIRWISE_DISTANCE_FUNCTIONS.\n If metric is \"precomputed\", X is assumed to be a distance matrix.\n Alternatively, if metric is a callable function, it is called on each\n pair of instances (rows) and the resulting value recorded. The callable\n should take two arrays from X as input and return a value indicating\n the distance between them. The default is \"euclidean\" which is\n interpreted as squared euclidean distance.\n\nmetric_params : dict, default=None\n Additional keyword arguments for the metric function.\n\n .. versionadded:: 1.1\n\ninit : {'random', 'pca'} or ndarray of shape (n_samples, n_components), default='random'\n Initialization of embedding. Possible options are 'random', 'pca',\n and a numpy array of shape (n_samples, n_components).\n PCA initialization cannot be used with precomputed distances and is\n usually more globally stable than random initialization. `init='pca'`\n will become default in 1.2.\n\nverbose : int, default=0\n Verbosity level.\n\nrandom_state : int, RandomState instance or None, default=None\n Determines the random number generator. Pass an int for reproducible\n results across multiple function calls. 
Note that different\n initializations might result in different local minima of the cost\n function. See :term:`Glossary <random_state>`.\n\nmethod : str, default='barnes_hut'\n By default the gradient calculation algorithm uses Barnes-Hut\n approximation running in O(NlogN) time. method='exact'\n will run on the slower, but exact, algorithm in O(N^2) time. The\n exact algorithm should be used when nearest-neighbor errors need\n to be better than 3%. However, the exact method cannot scale to\n millions of examples.\n\n .. versionadded:: 0.17\n Approximate optimization *method* via the Barnes-Hut.\n\nangle : float, default=0.5\n Only used if method='barnes_hut'\n This is the trade-off between speed and accuracy for Barnes-Hut T-SNE.\n 'angle' is the angular size (referred to as theta in [3]) of a distant\n node as measured from a point. If this size is below 'angle' then it is\n used as a summary node of all points contained within it.\n This method is not very sensitive to changes in this parameter\n in the range of 0.2 - 0.8. Angle less than 0.2 has quickly increasing\n computation time and angle greater 0.8 has quickly increasing error.\n\nn_jobs : int, default=None\n The number of parallel jobs to run for neighbors search. This parameter\n has no impact when ``metric=\"precomputed\"`` or\n (``metric=\"euclidean\"`` and ``method=\"exact\"``).\n ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.\n ``-1`` means using all processors. See :term:`Glossary <n_jobs>`\n for more details.\n\n .. versionadded:: 0.22\n\nsquare_distances : True, default='deprecated'\n This parameter has no effect since distance values are always squared\n since 1.1.\n\n .. deprecated:: 1.1\n `square_distances` has no effect from 1.1 and will be removed in\n 1.3.\n\nAttributes\n----------\nembedding_ : array-like of shape (n_samples, n_components)\n Stores the embedding vectors.\n\nkl_divergence_ : float\n Kullback-Leibler divergence after optimization.\n\nn_features_in_ : int\n Number of features seen during :term:`fit`.\n\n .. versionadded:: 0.24\n\nfeature_names_in_ : ndarray of shape (`n_features_in_`,)\n Names of features seen during :term:`fit`. Defined only when `X`\n has feature names that are all strings.\n\n .. versionadded:: 1.0\n\nn_iter_ : int\n Number of iterations run.\n\nSee Also\n--------\nsklearn.decomposition.PCA : Principal component analysis that is a linear\n dimensionality reduction method.\nsklearn.decomposition.KernelPCA : Non-linear dimensionality reduction using\n kernels and PCA.\nMDS : Manifold learning using multidimensional scaling.\nIsomap : Manifold learning based on Isometric Mapping.\nLocallyLinearEmbedding : Manifold learning using Locally Linear Embedding.\nSpectralEmbedding : Spectral embedding for non-linear dimensionality.\n\nReferences\n----------\n\n[1] van der Maaten, L.J.P.; Hinton, G.E. Visualizing High-Dimensional Data\n Using t-SNE. Journal of Machine Learning Research 9:2579-2605, 2008.\n\n[2] van der Maaten, L.J.P. t-Distributed Stochastic Neighbor Embedding\n https://lvdmaaten.github.io/tsne/\n\n[3] L.J.P. van der Maaten. Accelerating t-SNE using Tree-Based Algorithms.\n Journal of Machine Learning Research 15(Oct):3221-3245, 2014.\n https://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf\n\n[4] Belkina, A. C., Ciccolella, C. O., Anno, R., Halpert, R., Spidlen, J.,\n & Snyder-Cappione, J. E. (2019). Automated optimized parameters for\n T-distributed stochastic neighbor embedding improve visualization\n and analysis of large datasets. 
Nature Communications, 10(1), 1-12.\n\n[5] Kobak, D., & Berens, P. (2019). The art of using t-SNE for single-cell\n transcriptomics. Nature Communications, 10(1), 1-14.\n\nExamples\n--------\n>>> import numpy as np\n>>> from sklearn.manifold import TSNE\n>>> X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])\n>>> X_embedded = TSNE(n_components=2, learning_rate='auto',\n... init='random').fit_transform(X)\n>>> X_embedded.shape\n(4, 2)"}],"functions": [{"id": "sklearn/sklearn.manifold._t_sne/TSNE/__init__","name": "__init__","qname": "sklearn.manifold._t_sne.TSNE.__init__","decorators": [],"parameters": [{"id": "sklearn/sklearn.manifold._t_sne/TSNE/__init__/n_iter","name": "n_iter","qname": "sklearn.manifold._t_sne.TSNE.__init__.n_iter","default_value": "1000","assigned_by": "NAME_ONLY","is_public": true,"docstring": {"type": "int","description": "Maximum number of iterations for the optimization. Should be at\nleast 250."},"type": {}}],"results": [],"is_public": true,"reexported_by": [],"description": "T-distributed Stochastic Neighbor Embedding.\n\nt-SNE [1] is a tool to visualize high-dimensional data. It converts\nsimilarities between data points to joint probabilities and tries\nto minimize the Kullback-Leibler divergence between the joint\nprobabilities of the low-dimensional embedding and the\nhigh-dimensional data. t-SNE has a cost function that is not convex,\ni.e. with different initializations we can get different results.\n\nIt is highly recommended to use another dimensionality reduction\nmethod (e.g. PCA for dense data or TruncatedSVD for sparse data)\nto reduce the number of dimensions to a reasonable amount (e.g. 50)\nif the number of features is very high. This will suppress some\nnoise and speed up the computation of pairwise distances between\nsamples. For more tips see Laurens van der Maaten's FAQ [2].\n\nRead more in the :ref:`User Guide <t_sne>`.","docstring": ""}]}
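For orientation, the parameter under review can be exercised directly. The sketch below adapts the docstring example embedded in the API data above, passing `n_iter` explicitly; the explicit `n_iter=1000` and the `print` lines are illustrative additions, and the call targets the recorded version (scikit-learn 1.1.1):

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy data from the docstring example recorded above.
X = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]])

# n_iter caps the number of optimization iterations; the docstring
# requires at least 250, and 1000 is the default being annotated here.
tsne = TSNE(n_components=2, learning_rate='auto', init='random', n_iter=1000)
X_embedded = tsne.fit_transform(X)

print(X_embedded.shape)  # (4, 2)
print(tsne.n_iter_)      # iterations actually run, at most n_iter
```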
Minimal Usage Store (optional)
Minimal Usage Store for `sklearn/sklearn.manifold._t_sne/TSNE/__init__/n_iter`
URL Hash
#/sklearn/sklearn.manifold._t_sne/TSNE/__init__/n_iter
Expected Annotation Type
@boundary
Expected Annotation Inputs
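The docstring in the API data above pins down the limit a `@boundary` annotation on `n_iter` would capture: the value should be at least 250, with no documented upper bound. A minimal sketch of that constraint as a plain runtime check (the helper is hypothetical, not part of scikit-learn or the annotation tooling):

```python
def check_n_iter(n_iter: int) -> int:
    """Hypothetical check encoding the documented lower bound for
    TSNE's n_iter ("Should be at least 250", scikit-learn 1.1.1)."""
    if n_iter < 250:
        raise ValueError(f"n_iter should be at least 250, got {n_iter}.")
    return n_iter

check_n_iter(1000)  # the default value passes
```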
Suggested Solution (optional)
No response
Additional Context (optional)
No response