What does SOFTMAX LAYER mean in Chinese - Chinese translation

softmax层
一个softmax层

Examples of the use of "Softmax layer" in English and their translations into Chinese

At the very end, for classification problems, there is a softmax layer.
对于分类问题,最后会接一个softmax层。
It finally has a Softmax layer but none of the new layer types that MPS got.
它终于有了Softmax层,但没有MPS新增的那些层类型。
In neural networks, we achieve the same objective using the well-known softmax layer:
在神经网络中,我们用softmax层也实现了同样的功能:
But we haven't yet seen how a softmax layer lets us address the learning slowdown problem.
但是我们还没有看到softmax层如何解决学习缓慢的问题。
Remember that the cross-entropy involves a log, computed on the output of the softmax layer.
请记住,交叉熵涉及对softmax层的输出取对数。
In other words, the output from the softmax layer can be thought of as a probability distribution.
换言之,从softmax层得到的输出可以看做是一个概率分布。
The softmax layer then turns those scores into probabilities(all positive, all add up to 1.0).
接下来的softmax层便会把那些分数变成概率(全为正数,总和为1.0)。
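The behavior described in the sentence pair above (raw scores in, a probability distribution out, all positive and summing to 1.0) can be sketched in a few lines of plain Python; the function name and example scores below are illustrative, not taken from any particular library:

```python
import math

def softmax(scores):
    # Subtract the max score before exponentiating; this is the usual
    # trick for numerical stability and does not change the result.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    # Every output is positive and the outputs sum to 1.0,
    # so they can be read as a probability distribution.
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```

Note that the largest score always maps to the largest probability: softmax preserves the ranking of the scores, it only rescales them.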
For example, pre-trained network on ImageNet comes with a softmax layer with 1000 categories.
例如,在ImageNet上经过预先训练的网络带有1000个类别的softmax层。
The final softmax layer then receives this feature vector as input and uses it to classify the sentence.
最后的softmax层以这个特征向量作为输入,用其来对句子做分类。
In neural networks, we achieve the same objective using the well-known softmax layer:
在神经网络中,我们通过大家熟知的softmax层来实现同样的目标:
Keep this softmax layer in mind, as many of the subsequent word embedding models will use it in some fashion.
请大家记住这个softmax层,许多后续介绍的词嵌入模型都将或多或少地运用它。
The learning slowdown problem: We have now built up considerable familiarity with softmax layers of neurons.
学习缓慢的问题:我们现在已经对由神经元构成的softmax层有了相当的认识。
The softmax layer is disregarded, as the outputs of the fully connected layer become the inputs to another RNN.
softmax层被忽略,因为全连接层的输出将成为另一个RNN的输入。
Calculating the average of all patch and add another softmax layer to produce the probability of each class for the entire image.
计算所有图像块的平均值,并添加另一个softmax层来生成整个图像中每个类的概率。
Using this softmax layer, the model tries to maximize the probability of predicting the correct word at every timestep \(t\).
运用这个softmax层,模型将尝试着在每一时刻t都最大化正确预测下一词的概率。
However, I want to briefly describe another approach to the problem, based on what are called softmax layers of neurons.
不过,我想简要地介绍一下解决这个问题的另一种方法,这种方法基于所谓的神经元softmax层。
Softmax-based approaches are methods that keep the softmax layer intact, but modify its architecture to improve its efficiency.
基于softmax的方法保持softmax层不变,但修改其结构以提升效率。
It would also have a softmax layer at the end, but because BNNS doesn't come with a softmax function I left it out.
它最后也本该有一个softmax层,但因为BNNS没有提供softmax函数,所以我把它去掉了。
In addition, instead of training many different SVM's to classify each object class,there is a single softmax layer that outputs the class probabilities directly.
此外,不再训练许多不同的SVM来对每个目标类进行分类,而是用一个单独的softmax层直接输出类概率。
Along with the softmax layer, a linear regression layer is also used in parallel to output bounding box coordinates for predicted classes.
与softmax层一起,也并行使用线性回归层,以输出预测类的边界框坐标。
If our task is a classification on 10 categories,the new softmax layer of the network will be of 10 categories instead of 1000 categories.
如果我们的任务是10个类别的分类,则网络的新softmax层将是10个类别而不是1000个类别。
Inverting the softmax layer: Suppose we have a neural network with a softmax output layer, and the activations $a^L_j$ are known.
逆转softmax层:假设我们有一个使用softmax输出层的神经网络,且激活值 $a^L_j$ 已知。
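The inversion referred to in the sentence pair above can be sketched directly from the standard definition of softmax (notation follows the sentence; the constant $C$ below is introduced here for illustration):

```latex
a^L_j = \frac{e^{z^L_j}}{\sum_k e^{z^L_k}}
\quad\Longrightarrow\quad
\ln a^L_j = z^L_j - \ln\sum_k e^{z^L_k}
\quad\Longrightarrow\quad
z^L_j = \ln a^L_j + C
```

where $C = \ln\sum_k e^{z^L_k}$ does not depend on $j$. So the weighted inputs $z^L_j$ are recoverable from the activations only up to a shared additive constant, which softmax discards.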
Sampling-based approaches, on the other hand, completely do away with the softmax layer and instead optimise some other loss function that approximates the softmax.
基于采样的方法则是完全去掉softmax层,优化其它目标函数来近似softmax。
The second(and last) layer is a 10-node softmax layer- this returns an array of 10 probability scores that sum to 1.
第二个(也是最后一个)网络层是一个包含10个节点的softmax层-它将返回包含10个概率分数的数组,总和为1。
Later, in Chapter 6, we will sometimes use a softmax output layer, with log-likelihood cost.
后续在第6章中,我们有时会使用softmax输出搭配log-likelihood代价函数。
We add a pooling layer, some fully-connected layers, and finally a softmax classification layer and bounding box regressor.
我们添加一个池化层、一些全连接层,最后添加一个softmax分类层和边界框回归器(bounding box regressor)。
We add a pooling layer, some fully-connected layers, and finally a softmax classification layer and bounding box regressor.
我们再添加一个池化层、一些全连接层,以及最后一个softmax分类层和边界框回归器(bounding box regressor)。
