Scikit-Learn : Logistic Regression

Logistic regression, despite its name, is a classification algorithm rather than a regression algorithm. It estimates discrete values (0 or 1, yes/no, true/false) from a given set of independent variables. It is also called the logit or MaxEnt classifier.

Basically, it measures the relationship between a categorical dependent variable and one or more independent variables by estimating the probability of an event occurring via the logistic function.
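The logistic function mentioned above maps any real-valued score to a probability between 0 and 1. A minimal sketch in plain Python (an illustration of the function itself, not scikit-learn's internal implementation):

```python
import math

def sigmoid(z):
    """Logistic (sigmoid) function: maps a real score z to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# A score of 0 corresponds to a probability of exactly 0.5;
# large positive scores approach 1, large negative scores approach 0.
print(sigmoid(0))     # 0.5
print(sigmoid(5))     # close to 1
print(sigmoid(-5))    # close to 0
```

Logistic regression computes such a score as a linear combination of the independent variables and passes it through this function to obtain the probability of the positive class.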

sklearn.linear_model.LogisticRegression is the module used to implement logistic regression.

Parameters

The following table lists the parameters used by the LogisticRegression module −

1. penalty − str, 'l1', 'l2', 'elasticnet' or 'none', optional, default = 'l2'
   This parameter specifies the norm used in penalization (regularization).

2. dual − Boolean, optional, default = False
   It selects the dual or primal formulation. The dual formulation is implemented only for the L2 penalty.

3. tol − float, optional, default = 1e-4
   It represents the tolerance for the stopping criteria.

4. C − float, optional, default = 1.0
   It represents the inverse of regularization strength, which must always be a positive float.

5. fit_intercept − Boolean, optional, default = True
   It specifies whether a constant (bias or intercept) should be added to the decision function.

6. intercept_scaling − float, optional, default = 1
   It is useful only when the solver 'liblinear' is used and fit_intercept is set to True.

7. class_weight − dict or 'balanced', optional, default = None
   It represents the weights associated with classes. With the default option, all classes are assumed to have weight one. If you choose class_weight = 'balanced', the values of y are used to automatically adjust the weights.

8. random_state − int, RandomState instance or None, optional, default = None
   It represents the seed of the pseudo random number generator used while shuffling the data. The options are as follows −
   - int − random_state is the seed used by the random number generator.
   - RandomState instance − random_state is the random number generator.
   - None − the random number generator is the RandomState instance used by np.random.

9. solver − str, {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, optional, default = 'liblinear'
   It specifies which algorithm to use in the optimization problem. The options have the following properties −
   - liblinear − a good choice for small datasets; it also handles the L1 penalty. For multiclass problems, it is limited to one-versus-rest schemes.
   - newton-cg − handles only the L2 penalty.
   - lbfgs − handles the multinomial loss for multiclass problems; it also handles only the L2 penalty.
   - sag − suited to large datasets; for multiclass problems, it handles the multinomial loss.
   - saga − also a good choice for large datasets; it handles the multinomial loss for multiclass problems and, along with the L1 penalty, supports the 'elasticnet' penalty.

10. max_iter − int, optional, default = 100
    As the name suggests, it represents the maximum number of iterations taken for the solver to converge.

11. multi_class − str, {'ovr', 'multinomial', 'auto'}, optional, default = 'ovr'
    - ovr − a binary problem is fit for each label.
    - multinomial − the loss minimized is the multinomial loss fit across the entire probability distribution. This option cannot be used with solver = 'liblinear'.
    - auto − selects 'ovr' if solver = 'liblinear' or the data is binary, else 'multinomial'.

12. verbose − int, optional, default = 0
    By default, the value of this parameter is 0, but for the 'liblinear' and 'lbfgs' solvers we can set verbose to any positive number to get progress output.

13. warm_start − bool, optional, default = False
    With this parameter set to True, the solution of the previous call to fit is reused as initialization. With the default False, the previous solution is erased.

14. n_jobs − int or None, optional, default = None
    If multi_class = 'ovr', it represents the number of CPU cores used when parallelizing over classes. It is ignored when solver = 'liblinear'.

15. l1_ratio − float or None, optional, default = None
    It is used when penalty = 'elasticnet'. It is the Elastic-Net mixing parameter, with 0 <= l1_ratio <= 1.
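Several of these parameters interact: the 'elasticnet' penalty requires the 'saga' solver together with l1_ratio. A minimal sketch combining them on the Iris dataset (max_iter is raised here because 'saga' converges slowly on unscaled data):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# 'saga' is the only solver that supports penalty='elasticnet';
# l1_ratio=0.5 mixes the L1 and L2 penalties equally.
clf = LogisticRegression(
   penalty='elasticnet', solver='saga', l1_ratio=0.5,
   C=1.0, max_iter=5000, random_state=0
).fit(X, y)
print(clf.score(X, y))
```

Passing penalty='elasticnet' with any other solver raises an error, as does omitting l1_ratio.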

Attributes

The following table lists the attributes used by the LogisticRegression module −

1. coef_ − array, shape (n_features,) or (n_classes, n_features)
   It gives the coefficients of the features in the decision function. When the given problem is binary, it is of shape (1, n_features).

2. intercept_ − array, shape (1,) or (n_classes,)
   It represents the constant, also known as bias, added to the decision function.

3. classes_ − array, shape (n_classes,)
   It provides the list of class labels known to the classifier.

4. n_iter_ − array, shape (n_classes,) or (1,)
   It gives the actual number of iterations taken for all the classes.
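These attributes can be inspected on any fitted model. A short sketch on the Iris dataset, which has 3 classes and 4 features:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(solver='liblinear', random_state=0).fit(X, y)

# With 3 classes and 4 features, coef_ has shape (3, 4)
# and intercept_ has one bias term per class.
print(clf.classes_)           # class labels known to the classifier
print(clf.coef_.shape)        # (3, 4)
print(clf.intercept_.shape)   # (3,)
print(clf.n_iter_)            # iterations taken by the solver
```

Note that these attributes exist only after fit() has been called; accessing them on an unfitted estimator raises an error.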

Implementation Example

The following Python script provides a simple example of implementing logistic regression on the Iris dataset from scikit-learn −

from sklearn import linear_model
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
LRG = linear_model.LogisticRegression(
   random_state=0, solver='liblinear', multi_class='auto'
).fit(X, y)
LRG.score(X, y)

Output

0.96

The output shows that the above logistic regression model achieves an accuracy of 96 percent on the training data.
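Once fitted, the model can also classify individual samples with predict() and return per-class probabilities with predict_proba(). A short self-contained sketch using the same setup as the example above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
LRG = LogisticRegression(random_state=0, solver='liblinear').fit(X, y)

sample = X[:1]                     # first Iris sample
print(LRG.predict(sample))         # predicted class label
proba = LRG.predict_proba(sample)  # one probability per class
print(proba)                       # the three probabilities sum to 1
```

predict() simply returns the class whose predict_proba() column is largest for each sample.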
