dataclr is a Python library for feature selection, enabling data scientists and ML engineers to identify optimal features from tabular datasets. By combining filter and wrapper methods, it achieves state-of-the-art results, enhancing model performance and simplifying feature engineering.
- **Comprehensive Methods:**
  - **Filter Methods:** Statistical and data-driven approaches like `ANOVA`, `MutualInformation`, and `VarianceThreshold`.

    | Method | Regression | Classification |
    | --- | --- | --- |
    | `ANOVA` | Yes | Yes |
    | `Chi2` | No | Yes |
    | `CumulativeDistributionFunction` | Yes | Yes |
    | `CohensD` | No | Yes |
    | `CramersV` | No | Yes |
    | `DistanceCorrelation` | Yes | Yes |
    | `Entropy` | Yes | Yes |
    | `KendallCorrelation` | Yes | Yes |
    | `Kurtosis` | Yes | Yes |
    | `LinearCorrelation` | Yes | Yes |
    | `MaximalInformationCoefficient` | Yes | Yes |
    | `MeanAbsoluteDeviation` | Yes | Yes |
    | `mRMR` | Yes | Yes |
    | `MutualInformation` | Yes | Yes |
    | `Skewness` | Yes | Yes |
    | `SpearmanCorrelation` | Yes | Yes |
    | `VarianceThreshold` | Yes | Yes |
    | `VarianceInflationFactor` | Yes | Yes |
    | `ZScore` | Yes | Yes |

  - **Wrapper Methods:** Model-based iterative methods like `BorutaMethod`, `ShapMethod`, and `OptunaMethod`.

    | Method | Regression | Classification |
    | --- | --- | --- |
    | `BorutaMethod` | Yes | Yes |
    | `HyperoptMethod` | Yes | Yes |
    | `OptunaMethod` | Yes | Yes |
    | `ShapMethod` | Yes | Yes |
    | Recursive Feature Elimination | Yes | Yes |
    | Recursive Feature Addition | Yes | Yes |
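The filter methods listed above are rooted in classical statistics. As a rough point of reference (computed directly with scikit-learn here, not with dataclr's API), the quantities behind two of them — the ANOVA F-test and mutual information — can be scored per feature like this:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_classif, mutual_info_classif

# Synthetic classification data: two informative features, one pure-noise feature
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = pd.DataFrame({
    "informative": y + rng.normal(scale=0.5, size=200),
    "also_informative": 2 * y + rng.normal(scale=1.0, size=200),
    "noise": rng.normal(size=200),
})

# ANOVA F-test: scores how well the class means separate per feature
f_scores, _ = f_classif(X, y)

# Mutual information: scores dependency of any shape, including nonlinear
mi_scores = mutual_info_classif(X, y, random_state=0)

print(dict(zip(X.columns, f_scores.round(1))))
print(dict(zip(X.columns, mi_scores.round(2))))
```

Both scorers rank the informative columns far above the noise column; filter methods use rankings of this kind to discard weak features cheaply, before any model-based (wrapper) search runs.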
- **Flexible and Scalable:**
  - Supports both regression and classification tasks.
  - Handles high-dimensional datasets efficiently.
- **Interpretable Results:**
  - Provides ranked feature lists with detailed importance scores.
  - Reports the methods used along with their parameters.
- **Seamless Integration:**
  - Works with popular Python libraries like `pandas` and `scikit-learn`.
Install dataclr using pip:

```bash
pip install dataclr
```

Prepare your dataset as pandas DataFrames or Series and preprocess it (e.g., encode categorical features and normalize numerical values):
```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Example dataset
X = pd.DataFrame({...})  # Replace with your feature matrix
y = pd.Series([...])  # Replace with your target variable

# Preprocessing
X_encoded = pd.get_dummies(X)  # Encode categorical features
scaler = StandardScaler()
X_normalized = pd.DataFrame(scaler.fit_transform(X_encoded), columns=X_encoded.columns)
```

The `FeatureSelector` is a high-level API that combines multiple methods to select the best feature subsets:
```python
from sklearn.ensemble import RandomForestClassifier

from dataclr.feature_selection import FeatureSelector

# Define a scikit-learn model
my_model = RandomForestClassifier(n_estimators=100, random_state=42)

# Initialize the FeatureSelector
selector = FeatureSelector(
    model=my_model,
    metric="accuracy",
    X_train=X_train,
    X_test=X_test,
    y_train=y_train,
    y_test=y_test,
)

# Perform feature selection
selected_features = selector.select_features(n_results=5)
print(selected_features)
```

For granular control, you can use individual feature selection methods:
```python
from sklearn.linear_model import LogisticRegression

from dataclr.methods import MutualInformation

# Define a scikit-learn model
my_model = LogisticRegression(solver="liblinear", max_iter=1000)

# Initialize a method
method = MutualInformation(model=my_model, metric="accuracy")

# Fit and transform
results = method.fit_transform(X_train, X_test, y_train, y_test)
print(results)
```

Because the algorithm produces multiple candidate subsets, the benchmark results we report were chosen to balance feature count against performance, while the best-performing subset can still be selected when raw performance is the priority.
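The quickstart snippets above assume pre-split `X_train`, `X_test`, `y_train`, and `y_test`. A minimal, self-contained way to produce them with standard scikit-learn utilities (shown here on a synthetic stand-in dataset — substitute your own data) is:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real tabular dataset
X_raw, y_arr = make_classification(n_samples=300, n_features=8, random_state=42)
X = pd.DataFrame(X_raw, columns=[f"f{i}" for i in range(8)])
y = pd.Series(y_arr, name="target")

# Split first, keeping class balance in both halves
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Fit the scaler on the training split only, to avoid leaking
# test-set statistics into the preprocessing step
scaler = StandardScaler()
X_train = pd.DataFrame(scaler.fit_transform(X_train), columns=X.columns, index=X_train.index)
X_test = pd.DataFrame(scaler.transform(X_test), columns=X.columns, index=X_test.index)

print(X_train.shape, X_test.shape)
```

The resulting four DataFrames/Series can be passed directly to `FeatureSelector` or to an individual method's `fit_transform`, as in the examples above.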
Explore the full documentation for detailed usage instructions, API references, and examples.



