What is the translation of "HYPERPARAMETER" in Chinese?

超参数
一个超参数

Examples of using Hyperparameter in English and their translations into Chinese

Learning rate is a key hyperparameter.
学习率是一个关键的超参数。
This is a hyperparameter you may need to adjust.
这是一个你可能需要去调整的超参数
The learning rate is a hyperparameter.
学习率是一个超参数。
This is a hyperparameter that you will commonly adjust to achieve better results.
这是一个超参数,您通常会调整以获得更好的结果。
Where alpha is another hyperparameter.
这里的alpha是另外一个超参数。
The learning_rate hyperparameter tells the optimizer how big of a step it should take.
Learning_rate(学习率)超参数告诉优化器应该采取多大的步伐。
For example, learning rate is a hyperparameter.
例如,学习速率就是一种超参数
The state-of-the-art hyperparameter optimization algorithms.
超参数优化算法(hyperparameter optimization algorithms)。
Simplify the model: regularization, controlled by hyperparameter.
简化模型:正则化,由超参数控制.
What is considered a hyperparameter? on Reddit.
什么是超参数?(Reddit)
The degree of penalization(and thus sparsity)can be adjusted through the hyperparameter alpha.
惩罚的程度(以及由此导致的稀疏性)可以通过超参数alpha进行调节。
The second problem for exhaustive hyperparameter search is combinatorial explosion;
穷举超参数搜索的第二个问题是组合爆炸;
Similarly, many best practices or hyperparameter choices apply exclusively to it.
相似地,许多最佳实践或超参数选择只能应用到NMT领域。
They don't have time to test algorithms under every condition, or the space in articles to document every hyperparameter they tried.
他们没有足够时间在每种条件下都测试算法,或者在文章中记录他们尝试过的每个超参数
They also improved the model with hyperparameter optimization from TFX.
他们还用TFX中的超参数优化进一步改进了模型。
Learning rate is a hyperparameter that controls how much you adjust the weights of the network with respect to the loss gradient.
学习率是一个超参数,它控制着我们根据损失梯度调整网络权重的程度。
This includes the time required for hyperparameter search.
这包括超参数搜索所需的时间。
RayTune is a new distributed, hyperparameter search framework for deep learning and RL.
RayTune是一个新的深度学习和RL分布式超参数搜索框架。
If, for example, you're doing hyperparameter optimization, you can easily invoke different parameters with each run.
例如,如果你正在进行超参数优化,那么你可以在每次运行时轻松调用不同的参数。
The real holy grail of AutoDL, however, is fully automated hyperparameter tuning, not transfer learning.
但是,自动化深度学习的真正目标是完全自动化的超参数调节,而不是迁移学习。
The size of this list is a hyperparameter we can set; basically it would be the length of the longest sentence in our training dataset.
向量列表的大小是我们可以设置的超参数,一般是我们训练集中最长句子的长度。
The parameter gamma is considered to be a hyperparameter and may be optimized.
该参数gamma被认为是一个超参数,可以进行优化。
Another hyperparameter for your convolutions is the stride size, defining by how much you want to shift your filter at each step.
卷积的另一个超参数是步幅大小,它定义了你在每一步想要移动过滤器的幅度。
Furthermore, you learned why the learning rate is the most important hyperparameter and how you can check whether your algorithm is learning properly.
此外,你了解了为什么学习率是最重要的超参数,以及如何检查你的算法是否在正确地学习。
All datasets use a single forward language model, without any ensembling, and the majority of the reported results use the exact same hyperparameter settings.
其中所有数据集都使用单一的正向语言模型,没有任何集成,并且大多数报告结果都使用了完全相同的超参数设置。
Note that the number of topics is a hyperparameter that must be chosen in advance and is not estimated from the data.
值得注意的是,模型的主题数量是一个超参数,必须提前设置而不是从数据中估计。
Keras also has a scikit-learn API, so that you can use the scikit-learn grid search to perform hyperparameter optimization in Keras models.
另外,它还有Scikit-learn API,因此你可以利用Scikit-learn网格搜索在Keras模型中执行超参数优化。
Therefore, seeing SGD as a distribution moving over time showed us that learning_rate/batch_size is more meaningful than each hyperparameter separately regarding convergence and generalization.
因此,将SGD看作是一个随时间变化的分布表明,在收敛性和泛化方面,learning_rate/batch_size比每个独立的超参数更有意义。
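Several of the example sentences above refer to grid search and hyperparameter optimization. As a minimal illustrative sketch (assuming scikit-learn is available; the dataset and parameter values here are hypothetical, chosen only to show the mechanics), an exhaustive grid search over two hyperparameters might look like:

```python
# Minimal sketch of exhaustive hyperparameter (grid) search with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic toy dataset, used only for illustration.
X, y = make_classification(n_samples=200, random_state=0)

# Every combination of values below is tried; this combinatorial growth is
# why exhaustive grids must stay small (2 x 3 = 6 candidate settings here).
param_grid = {
    "alpha": [1e-4, 1e-3],                                 # regularization strength
    "learning_rate": ["constant", "optimal", "invscaling"],  # learning-rate schedule
}

search = GridSearchCV(SGDClassifier(eta0=0.01, random_state=0), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The best-scoring combination (by cross-validated accuracy) is reported in `search.best_params_`; all tried combinations are listed in `search.cv_results_`.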
