What does K-MEANS CLUSTERING mean in Chinese - Chinese translation

k-means聚类
K均值聚类

Examples of K-means clustering in English and their translations into Chinese

Unsupervised learning: K-means clustering.
无监督学习:K-means聚类。
The k-means clustering concept sounds pretty great, right?
K-means聚类概念听起来很不错,不是吗?
Try to implement simple models such as decision trees and K-means clustering.
尝试实现简单的模型,如决策树和K均值聚类。
K-Means Clustering (creating “centres” for each cluster, based on the nearest mean);
K-均值聚类(根据最接近的均值为每个聚类创建“中心”);
All four conditions can be used as possible termination conditions in K-means clustering:
这四种条件都可以作为K均值聚类的终止条件:
Such as regressions, k-means clustering and support vector machines, have been in use for decades.
例如回归、k均值聚类和支持向量机等技术,已经被使用了几十年。
You will want to be comfortable with regression, classification, and k-means clustering models.
您需要熟练掌握回归、分类和k-means聚类模型。
We will also cover the k-means clustering algorithm and see how Gaussian Mixture Models improve on it.
我们还将讨论K-means聚类算法,看看高斯混合模型是如何改进它的。
We have talked about regression(both linear and logistic), decision trees,and finally, k-means clustering.
我们已经谈到回归(线性和逻辑)、决策树,以及最后的K-均值聚类。
For unsupervised learning one can use k-means clustering and affinity propagation.
对于无监督学习,可以使用K-means聚类和affinity propagation(近邻传播)。
The K-Means Clustering is an effective method for finding a good fit of clusters for your data.
K均值聚类是一种有效的方法,可以为你的数据找到一个良好的聚类方式。
For unsupervised learning, milk supports k-means clustering and affinity propagation.
对于无监督学习,milk提供K-means聚类和affinity propagation(近邻传播)算法。
Don't worry if you're not an artificial intelligence expert - I won't ever mention Linear Regression and K-Means Clustering again.
如果你不是人工智能专家,也不用担心:我不会再提及线性回归和K-均值聚类。
The obvious disadvantage of k-means clustering is that you need to assume in advance how many clusters you will have.
K-均值聚类的一个明显的缺点是你必须提供关于你期望找到多少个聚类的先验假设。
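The sentence above names the main practical question with k-means: the number of clusters k must be fixed before the algorithm runs. A common heuristic (not part of the original examples) is to run k-means for several candidate values of k and look for an "elbow" where the within-cluster sum of squares stops dropping sharply. A minimal pure-NumPy sketch on made-up 1-D data:

```python
import numpy as np

# Made-up 1-D data with three visually obvious groups.
x = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1, 8.9]).reshape(-1, 1)

def inertia(points, k, iters=20):
    """Run plain k-means and return the within-cluster sum of squares."""
    # Deterministic init for this sketch: k points spread evenly through the array.
    centroids = points[np.linspace(0, len(points) - 1, k, dtype=int)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then update the centroids.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return float(((points - centroids[labels]) ** 2).sum())

# Inertia always shrinks as k grows, so it cannot pick k by itself;
# the sharp drop ("elbow") between k=2 and k=3 is what suggests k=3 here.
for k in (1, 2, 3, 4):
    print(k, inertia(x, k))
```

The drop from k=2 to k=3 is large, while the gain from k=3 to k=4 is negligible, which matches the three groups visible in the data.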
There are many clustering algorithms for doing clustering, but k-means clustering may be the most common.
做聚类的聚类算法有很多,但k均值聚类可能是最常见的。
Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 out of the 270 weather stations.
经典k-均值聚类和内存计算方法对270个气象站中245个的分类结果一致。
Explain what a local optimum is and the reason it is important in a particular context, like k-means clustering.
解释什么是局部最优(local optimum),以及为什么它在特定情境(如K均值聚类)中很重要。
K-Nearest Neighbors is a supervised classification algorithm, while k-means clustering is an unsupervised clustering algorithm.
K-Nearest Neighbors是一种监督分类算法,而k-means聚类是一种无监督的聚类算法。
Similarly, the in-memory computing approach classified 13 stations as correlated that had been marked uncorrelated by k-means clustering.
类似地,内存计算方法将13个被k-means聚类标记为不相关的站点分类为相关。
Using the very fast and intuitive k-means algorithm (see In Depth: K-Means Clustering), we find the clusters shown in the following figure:
使用非常快速和直观的k-means算法(参见"In Depth: K-Means Clustering"一节),我们发现如下图所示的簇:
Taken Intro to Machine Learning and have an understanding of common supervised and unsupervised learning algorithms, such as SVM and k-means clustering.
已学习机器学习入门课程,并了解常见的监督式学习和非监督式学习算法,例如SVM和K均值聚类。
Algorithms such as K-Means clustering work by randomly assigning initial“proposed” centroids, then reassigning each data point to its closest centroid.
诸如K-Means聚类的算法通过随机分配初始“建议”质心,然后将每个数据点重新分配到其最接近的质心来工作。
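The procedure that the sentence above describes (random initial "proposed" centroids, then repeated reassignment of each point to its closest centroid) can be sketched in a few lines of Python; the toy points below are invented purely for illustration:

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means: randomly pick initial "proposed" centroids,
    then alternate point reassignment and centroid updates."""
    rng = np.random.default_rng(seed)
    # Random initial centroids, drawn from the data points themselves.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Reassign each data point to its closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated made-up blobs.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels, centroids = kmeans(pts, k=2)
```

With data this well separated, the alternation converges in a few iterations regardless of which points are drawn as the initial centroids; on harder data the result depends on the random start, which is why implementations typically rerun with several seeds.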
These incorporate the most common algorithms used by data scientists: linear models, k-means clustering, decision trees and so on.
这其中包含了一些数据科学家最常用的算法:线性模型、k均值聚类、决策树等等。
K-Means clustering is used for finding similarities between data points and categorizing them into a number of different groups, K being the number of groups.
K-Means聚类算法用于查找数据点之间的相似性并将它们分类为多个不同的组别,K是组别的数量。
In those libraries you can find logistic regression, k-means clustering, decision trees, k-nearest neighbours, principal component analysis and naive Bayes for JavaScript.
在这些库中,您可以找到适用于JavaScript的逻辑回归、k-means聚类、决策树、k近邻、主成分分析和朴素贝叶斯。
To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:
为了展示这项技术,论文作者选取了两个基于时间的例子,并将其结果与传统机器学习方法(如k均值聚类)得出的结果进行了比较:
However, k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.
然而,k-means聚类倾向于找到具有可比空间范围的聚类,而期望最大化机制允许聚类具有不同的形状。
K-means clustering algorithm; Fuzzy clustering algorithm; Gaussian (Expectation-Maximization) clustering algorithm; Clustering Methods[6]; C-means Clustering Algorithm[7]; Connected-component labeling.
K-均值聚类算法;模糊聚类算法;高斯(期望最大化)聚类算法;聚类方法[1];C-均值聚类算法[2];连通分量标记。