Examples of the use of Feature selection in English and their translations into Russian
Afterwards, feature selection can be used in order to prevent overfitting.
Determining a subset of the initial features is called feature selection.
Feature selection algorithms are needed to reduce feature space.
Dimensionality reduction and feature selection can decrease variance by simplifying models.
Feature selection techniques should be distinguished from feature extraction.
This is a survey of the application of feature selection metaheuristics recently used in the literature.
Filter feature selection is a specific case of a more general paradigm called Structure Learning.
For example, a maximum entropy rate criterion may be used for feature selection in machine learning.
The paper surveys feature selection, classification, and regression algorithms, as well as evaluation metrics.
Embedded methods are a catch-all group of techniques which perform feature selection as part of the model construction process.
Feature selection approaches try to find a subset of the original variables also called features or attributes.
In traditional regression analysis, the most popular form of feature selection is stepwise regression, which is a wrapper technique.
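As a rough illustration of the wrapper idea behind stepwise selection, scikit-learn's SequentialFeatureSelector performs a comparable forward search; the regression model, dataset, and step count below are assumptions made for the example, not part of the original sentence.

```python
# Forward sequential selection as a stand-in for stepwise regression:
# a wrapper that adds one feature at a time based on cross-validated fit.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=10, n_informative=3, random_state=0)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward", cv=5
)
selector.fit(X, y)
print("chosen features:", selector.get_support(indices=True))
```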
There are various feature selection mechanisms that utilize mutual information for scoring the different features.
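For illustration, here is a minimal sketch of such a mutual-information filter using scikit-learn's mutual_info_classif; the synthetic dataset and the number of kept features are illustrative assumptions.

```python
# Minimal sketch: score features by mutual information with the class label
# and keep the top-k. Dataset and k are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, SelectKBest

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Rank every feature by its estimated mutual information with y
scores = mutual_info_classif(X, y, random_state=0)
print("top features by MI:", np.argsort(scores)[::-1][:5])

# Or select the k best features in one step
X_selected = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)
print(X_selected.shape)  # (500, 5)
```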
A learning algorithm takes advantage of its own variable selection process and performs feature selection and classification simultaneously.
The software provides a speech feature selection module using greedy add-del and genetic algorithms.
However, the issue of biased predictor selection is avoided by the Conditional Inference approach, a two-stage approach, or adaptive leave-one-out feature selection.
The feature selection methods are typically presented in three classes based on how they combine the selection algorithm and the model building.
Figuring out the most effective representation of your input data (called feature selection) is one of the keys to using machine learning algorithms well.
Feature selection techniques are often used in domains where there are many features and comparatively few samples or data points.
The choice of evaluation metric heavily influences the algorithm, and it is these evaluation metrics which distinguish between the three main categories of feature selection algorithms: wrappers, filters, and embedded methods.
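As a hedged sketch of that three-way split, the snippet below pairs one scikit-learn tool with each category; the particular estimators, penalties, and parameters are illustrative choices, not prescribed by the text.

```python
# One representative per category; estimators and parameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, RFE, SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=15, n_informative=4, random_state=0)

# Filter: rank features with a statistic computed independently of any model
X_filter = SelectKBest(f_classif, k=4).fit_transform(X, y)

# Wrapper: repeatedly fit a model and eliminate the weakest features
X_wrapper = RFE(LogisticRegression(max_iter=1000), n_features_to_select=4).fit_transform(X, y)

# Embedded: selection happens inside model training (L1 penalty zeros out coefficients)
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
X_embedded = SelectFromModel(l1_model).fit_transform(X, y)

print(X_filter.shape, X_wrapper.shape, X_embedded.shape)
```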
Peng et al. proposed a feature selection method that can use either mutual information, correlation, or distance/similarity scores to select features.
Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection.
The optimal solution to the filter feature selection problem is the Markov blanket of the target node, and in a Bayesian Network, there is a unique Markov Blanket for each node.
In a typical machine learning application, practitioners must apply the appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods that make the dataset amenable for machine learning.
The size of filter F determines the efficiency of the feature selection in the image and the number of stored network parameters; it is therefore one of the most significant characteristics of the architecture.
Archetypal cases for the application of feature selection include the analysis of written texts and DNA microarray data, where there are many thousands of features, and a few tens to hundreds of samples.
The above may then be written as an optimization problem: $\mathrm{mRMR} = \max_{x\in\{0,1\}^{n}} \left[ \frac{\sum_{i=1}^{n} x_{i} I(f_{i};c)}{\sum_{i=1}^{n} x_{i}} - \frac{\sum_{i,j=1}^{n} x_{i} x_{j} I(f_{i};f_{j})}{\left(\sum_{i=1}^{n} x_{i}\right)^{2}} \right]$. The mRMR algorithm is an approximation of the theoretically optimal maximum-dependency feature selection algorithm that maximizes the mutual information between the joint distribution of the selected features and the classification variable.
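A minimal greedy sketch of the mRMR idea (relevance minus redundancy) follows, written with scikit-learn's mutual-information estimators; the greedy loop, dataset, and number of selected features are simplifying assumptions rather than the exact published algorithm.

```python
# Greedy mRMR-style selection: at each step add the feature with the highest
# (relevance to the label) minus (mean redundancy with already-chosen features).
# This is a simplified illustration, not the reference implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

X, y = make_classification(n_samples=400, n_features=12, n_informative=4, random_state=0)
k = 4

relevance = mutual_info_classif(X, y, random_state=0)          # I(f_i; c)
selected, remaining = [], list(range(X.shape[1]))

for _ in range(k):
    best, best_score = None, -np.inf
    for j in remaining:
        if selected:
            redundancy = np.mean([
                mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
                for s in selected
            ])                                                  # mean I(f_j; f_s)
        else:
            redundancy = 0.0
        score = relevance[j] - redundancy
        if score > best_score:
            best, best_score = j, score
    selected.append(best)
    remaining.remove(best)

print("selected feature indices:", selected)
```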
Feature selection finds the relevant feature set for a specific target variable whereas structure learning finds the relationships between all the variables, usually by expressing these relationships as a graph.
Other available filter metrics include class separability, error probability, inter-class distance, probabilistic distance, entropy, consistency-based feature selection, and correlation-based feature selection. The choice of optimality criteria is difficult, as there are multiple objectives in a feature selection task.
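As one concrete instance of those filter metrics, a correlation-based score can be computed directly, as in the rough sketch below; the synthetic data and the threshold are illustrative assumptions.

```python
# Correlation-based filter: keep features whose absolute Pearson correlation
# with the target exceeds a threshold. Data and threshold are illustrative.
import numpy as np
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=8, n_informative=3, random_state=0)

corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
keep = np.where(corr > 0.2)[0]
print("per-feature |correlation| with target:", corr.round(2))
print("features kept by the filter:", keep)
```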