Aiblogtech

How to do L1 and L2 regularization in Python

L1 and L2 regularization improve a model's ability to generalize by penalizing large coefficients, which helps prevent overfitting. In linear regression, Lasso Regression applies an L1 penalty and Ridge Regression applies an L2 penalty; both are widely used extensions of ordinary least squares.

L1 Regularization (Lasso Regression):

Lasso Regression adds a penalty term to the loss function equal to the sum of the absolute values of the coefficients. Because this penalty can drive some coefficients to exactly zero, Lasso effectively performs feature selection and yields a model that is easier to interpret.
The Lasso Regression loss function can be expressed as:

Loss = (Sum of squared residuals) + (alpha * Sum of the absolute values of coefficients)

Here, "alpha" is the regularization-strength hyperparameter: larger values of alpha apply stronger regularization and drive more coefficients to exactly zero.

In Python, you can use the Lasso class from sklearn.linear_model.

from sklearn.linear_model import Lasso

# Create a Lasso Regression model with a specific alpha
lasso_model = Lasso(alpha=0.1)

# Fit the model to your training data
lasso_model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = lasso_model.predict(X_test)
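To see the feature-selection effect in practice, here is a minimal, self-contained sketch on synthetic data (the data, feature count, and alpha value are illustrative assumptions, not part of the snippet above):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic data: 5 features, but only the first two actually drive y
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.randn(100)

lasso = Lasso(alpha=0.5)
lasso.fit(X, y)

# The L1 penalty drives the coefficients of the three irrelevant
# features to exactly zero, keeping only the informative ones
print(lasso.coef_)
```

Inspecting lasso.coef_ after fitting is a quick way to see which features the model kept.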

L2 Regularization (Ridge Regression):

Ridge Regression, also known as L2 regularization, adds a penalty term to the loss function equal to the sum of the squared coefficient values. By penalizing large coefficients, it shrinks the influence of less informative features and helps prevent overfitting. The Ridge Regression loss function can be expressed as:

Loss = (Sum of squared residuals) + (alpha * Sum of the squared values of coefficients)

As with Lasso, "alpha" is the regularization-strength hyperparameter, and higher values of alpha lead to stronger regularization. In Python, you can use the Ridge class from sklearn.linear_model.

from sklearn.linear_model import Ridge

# Create a Ridge Regression model with a specific alpha
ridge_model = Ridge(alpha=1.0)

# Fit the model to your training data
ridge_model.fit(X_train, y_train)

# Make predictions on the test set
y_pred = ridge_model.predict(X_test)
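Unlike the L1 penalty, the L2 penalty shrinks coefficients toward zero without typically making them exactly zero. A minimal sketch on synthetic data (the data and alpha value are illustrative assumptions) makes the contrast visible:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic data: 5 features, only the first two actually drive y
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.randn(100)

ridge = Ridge(alpha=10.0)
ridge.fit(X, y)

# All five coefficients are shrunk but remain nonzero; the
# informative ones stay much larger than the irrelevant ones
print(ridge.coef_)
```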

Both Lasso and Ridge Regression are useful when features are highly correlated (multicollinearity) or when there are many features of which only a few are likely to matter. Which of the two works best ultimately depends on the problem and the data. When neither alone is ideal, Elastic Net offers a middle ground: it combines the L1 and L2 penalties in a single model.
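As a sketch of that combined penalty, scikit-learn's ElasticNet class exposes both alpha (overall penalty strength) and l1_ratio (the L1/L2 mix); the synthetic data and parameter values below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Synthetic data, as before: only the first two features matter
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.randn(100)

# l1_ratio=1.0 would be pure Lasso (L1); l1_ratio=0.0 pure Ridge (L2)
enet = ElasticNet(alpha=0.5, l1_ratio=0.5)
enet.fit(X, y)

# The L1 component still suppresses irrelevant features, while the
# L2 component stabilizes the fit when features are correlated
print(enet.coef_)
```

In practice, both alpha and l1_ratio are usually tuned with cross-validation rather than set by hand.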
