What is the translation of "CROSS-ENTROPY" in Chinese?

Examples of using Cross-entropy in English and their translations into Chinese

Cross-entropy loss is defined as:
交叉熵损失定义为：
Look at the test cross-entropy curve.
看看测试交叉熵曲线。
Cross-Entropy Loss functions are optimized using Gradient Descent.
交叉熵损失函数使用梯度下降进行优化。
We calculate the gradient of cross-entropy loss.
我们计算交叉熵损失的梯度。
In this picture, cross-entropy is represented as a function of 2 weights.
在这幅图中，交叉熵被表示为两个权重的函数。
Then we can implement the cross-entropy function.
然后我们可以实现交叉熵函数。
The cross-entropy is measuring how inefficient our predictions are for describing the truth.
交叉熵衡量的是我们的预测在描述真相时的低效程度。
And now you can compute your cross-entropy in a safe way.
现在，您可以以安全的方式计算交叉熵。
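One common "safe" way to compute cross-entropy is to clip the predicted probabilities away from zero before taking the log, so the computation never produces `-inf` or `NaN`. A minimal NumPy sketch (the function name and values are illustrative, not from the source):

```python
import numpy as np

def safe_cross_entropy(p, q, eps=1e-12):
    """Cross-entropy H(p, q) = -sum(p * log q), with q clipped
    away from 0 so log never yields -inf or NaN."""
    q = np.clip(q, eps, 1.0)
    return -np.sum(p * np.log(q))

p = np.array([1.0, 0.0, 0.0])   # true distribution (one-hot)
q = np.array([0.7, 0.2, 0.1])   # predicted distribution
print(safe_cross_entropy(p, q))
```

With a one-hot `p`, this reduces to the negative log-probability assigned to the correct class.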
If we were dealing with a classification outcome, we might use cross-entropy.
如果我们处理的是分类结果，我们可能会使用交叉熵。
In classification trees, we use cross-entropy and Gini index.
在分类树中我们使用交叉熵和基尼指数。
To implement cross-entropy we need to first add a new placeholder to input the correct answers.
为了实现交叉熵，我们需要先添加一个新的占位符来输入正确答案。
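The sentence above refers to TensorFlow 1.x placeholders. As a library-agnostic sketch, the same computation — mean cross-entropy between one-hot "correct answers" and predicted probabilities — can be written in NumPy (all values are hypothetical):

```python
import numpy as np

# stand-in for the placeholder: one-hot correct answers for 3 samples
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1]], dtype=float)
# model's predicted probabilities (hypothetical values)
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.2, 0.7]])

# per-sample cross-entropy, then mean over the batch
cross_entropy = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
print(cross_entropy)
```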
We introduce cross-entropy benchmarking to obtain the experimental fidelity of complex multiqubit dynamics.
我们引入了交叉熵基准测试来获得复杂多比特动力学的实验保真度。
Roughly speaking, the idea is that the cross-entropy is a measure of surprise.
大致说来,想法就是:交叉熵是对惊讶的测度。
Entropy, cross-entropy and KL-divergence are often used in machine learning, in particular for training classifiers.
熵、交叉熵和KL散度经常用于机器学习，特别是用于训练分类器。
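The relationship among the three quantities can be checked numerically: cross-entropy decomposes as entropy plus KL-divergence, H(p, q) = H(p) + KL(p ‖ q). A minimal sketch with hypothetical distributions:

```python
import numpy as np

p = np.array([0.5, 0.5])   # true distribution (hypothetical)
q = np.array([0.9, 0.1])   # model distribution (hypothetical)

entropy = -np.sum(p * np.log(p))        # H(p)
cross_entropy = -np.sum(p * np.log(q))  # H(p, q)
kl = np.sum(p * np.log(p / q))          # KL(p || q)

# identity: H(p, q) = H(p) + KL(p || q)
print(np.isclose(cross_entropy, entropy + kl))
```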
Since the true distribution is unknown, cross-entropy cannot be directly calculated.
由于真实分布是未知的，我们不能直接计算交叉熵。
The goal of the training is to preserve as much information as possible during this compression (minimize cross-entropy).
训练的目标是在压缩过程中尽可能多地保存信息(最小化交叉熵)。
Logistic regression: model, cross-entropy loss, class probability estimation.
Logistic回归：模型、交叉熵损失、类概率估计。
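As a sketch of how these pieces fit together, here is a minimal logistic regression trained by gradient descent on the cross-entropy loss; the data and hyperparameters are hypothetical, chosen only so the model learns to separate positive from negative inputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy 1-D binary data (hypothetical): x > 0 -> class 1
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(200):
    p = sigmoid(w * X + b)          # class-probability estimates
    grad_w = np.mean((p - y) * X)   # gradient of mean cross-entropy w.r.t. w
    grad_b = np.mean(p - y)         # ... and w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

print(sigmoid(2.0 * w + b))   # probability of class 1 for x = 2
```

The gradient `(p - y) * x` follows from differentiating the binary cross-entropy through the sigmoid, which is why the update rule is so compact.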
Sampling from random quantum circuits is an excellent calibration benchmark for quantum computers, which we call cross-entropy benchmarking.
从随机量子线路采样是量子计算机的一个很好的校准基准，我们称之为交叉熵基准测试。
In this chapter we will mostly use the cross-entropy cost to address the problem of learning slowdown.
在本章中我们主要使用交叉熵代价函数来解决学习速度衰退的问题。
Most of the time, we simply use the cross-entropy between the data distribution and the model distribution.
大多数时候，我们简单地使用数据分布和模型分布间的交叉熵。
The techniques we will develop in this chapter include: a better choice of cost function, known as the cross-entropy cost function;
我们将在本章介绍的技术包括：选择更好的代价函数，即交叉熵代价函数（the cross-entropy cost function）。
KL-Divergence is functionally similar to multi-class cross-entropy and is also called the relative entropy of P with respect to Q.
KL散度在功能上类似于多分类交叉熵，也称为P相对于Q的相对熵。
From the training data we can build a model$q(y\vert x;\theta)$ to approximate this conditional,for example using a deep net minimizing cross-entropy or whatever.
从训练数据中我们可以建立一个模型q(y|x;θ)来接近该条件分布,例如使用深度网络最小化交叉熵或其它。
The choice of a loss function (here, "cross-entropy") is explained later.
损失函数（loss function，此处为「交叉熵」）的选择稍后会做出解释。
(2003) found that using the cross-entropy error function instead of the sum-of-squares for a classification problem leads to faster training as well as improved generalization.
(2003)发现,对于分类问题,使用交叉熵误差函数的训练速度会比平方和误差函数更快,同时也提升了泛化能力。
In 2004, Zlochin and his colleagues showed that COA-type algorithms could be assimilated to methods of stochastic gradient descent on the cross-entropy and estimation of distribution algorithms.
在2004年，Zlochin和他的同事们表明，COA类算法可以被归入基于交叉熵和分布估计算法的随机梯度下降方法。
Currently, MLPClassifier supports only the Cross-Entropy loss function, which allows probability estimates by running the predict_proba method.
目前，MLPClassifier只支持交叉熵损失函数，它允许通过运行predict_proba方法进行概率估计。