'Predict confidence scores for samples. The confidence score for a sample is the signed distance of that sample to the hyperplane. Parameters X : {array-like, sparse matrix}, shape = (n_samples, n_features) Samples. Returns array, shape=(n_samples,) if n_classes == 2 else (n_samples, n_classes) Confidence scores per (s...
def decision_function(self, X):
if ((not hasattr(self, 'coef_')) or (self.coef_ is None)): raise NotFittedError(('This %(name)s instance is not fitted yet' % {'name': type(self).__name__})) X = check_array(X, accept_sparse='csr') n_features = self.coef_.shape[1] if (X.shape[1] != n_features): raise Va...
'Predict class labels for samples in X. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Samples. Returns C : array, shape = [n_samples] Predicted class label per sample.'
def predict(self, X):
scores = self.decision_function(X)
if (len(scores.shape) == 1):
    indices = (scores > 0).astype(int)  # np.int is a deprecated alias for the builtin int
else:
    indices = scores.argmax(axis=1)
return self.classes_[indices]
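The body above maps decision scores to class labels: in the binary case a positive score selects the second class, otherwise the argmax over per-class scores wins. A minimal standalone sketch of that indexing logic (the `classes_` array and score values are made-up examples, not from a fitted model):

```python
import numpy as np

classes_ = np.array(["neg", "pos"])   # hypothetical fitted class labels

def labels_from_scores(scores, classes):
    if scores.ndim == 1:
        # Binary: a positive score selects the second class.
        indices = (scores > 0).astype(int)
    else:
        # Multiclass: the highest per-class score wins.
        indices = scores.argmax(axis=1)
    return classes[indices]

binary = labels_from_scores(np.array([-1.5, 0.2, 3.0]), classes_)
# -> ['neg', 'pos', 'pos']
```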
'Probability estimation for OvR logistic regression. Positive class probabilities are computed as 1. / (1. + np.exp(-self.decision_function(X))); multiclass is handled by normalizing that over all classes.'
def _predict_proba_lr(self, X):
prob = self.decision_function(X)
prob *= (-1)
np.exp(prob, prob)
prob += 1
np.reciprocal(prob, prob)
if (prob.ndim == 1):
    return np.vstack([(1 - prob), prob]).T
else:
    prob /= prob.sum(axis=1).reshape((prob.shape[0], (-1)))
    return prob
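The in-place sequence above (negate, exp, add 1, reciprocal) is the logistic sigmoid 1 / (1 + exp(-score)) computed without temporaries, followed by row normalization in the multiclass one-vs-rest case. The same math written out plainly, on illustrative scores:

```python
import numpy as np

def ovr_probabilities(scores):
    """Logistic sigmoid per score; rows normalized to sum to 1 for multiclass."""
    prob = 1.0 / (1.0 + np.exp(-scores))
    if prob.ndim == 1:
        # Binary case: stack P(negative) = 1 - p and P(positive) = p.
        return np.vstack([1 - prob, prob]).T
    return prob / prob.sum(axis=1, keepdims=True)

p = ovr_probabilities(np.array([0.0, 2.0]))
# row 0 is [0.5, 0.5]; every row sums to 1
```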
'Convert coefficient matrix to dense array format. Converts the ``coef_`` member (back) to a numpy.ndarray. This is the default format of ``coef_`` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns self : estimator'
def densify(self):
msg = 'Estimator, %(name)s, must be fitted before densifying.'
check_is_fitted(self, 'coef_', msg=msg)
if sp.issparse(self.coef_):
    self.coef_ = self.coef_.toarray()
return self
'Convert coefficient matrix to sparse format. Converts the ``coef_`` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The ``intercept_`` member is not converted. Notes For non-sparse models, i.e. when there are n...
def sparsify(self):
msg = 'Estimator, %(name)s, must be fitted before sparsifying.'
check_is_fitted(self, 'coef_', msg=msg)
self.coef_ = sp.csr_matrix(self.coef_)
return self
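`densify` and `sparsify` just convert `coef_` between a dense ndarray and a scipy.sparse CSR matrix. A quick sketch of the round trip on a mostly-zero coefficient matrix (the array is invented for illustration):

```python
import numpy as np
import scipy.sparse as sp

coef = np.zeros((1, 10))
coef[0, 3] = 2.5                     # an L1-style solution: nearly all zeros

coef_sparse = sp.csr_matrix(coef)    # what sparsify() stores
coef_dense = coef_sparse.toarray()   # what densify() restores

# CSR keeps only the single nonzero entry; the round trip is lossless.
```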
'Fit linear model. Parameters X : numpy array or sparse matrix of shape [n_samples,n_features] Training data y : numpy array of shape [n_samples, n_targets] Target values. Will be cast to X\'s dtype if necessary sample_weight : numpy array of shape [n_samples] Individual weights for each sample .. versionadded:: 0.17 p...
def fit(self, X, y, sample_weight=None):
n_jobs_ = self.n_jobs (X, y) = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], y_numeric=True, multi_output=True) if ((sample_weight is not None) and (np.atleast_1d(sample_weight).ndim > 1)): raise ValueError('Sample weights must be 1D array or scalar') (X, y, X_offset,...
'Fit the model using X, y as training data. Parameters X : array-like, shape = [n_samples, n_features] Training data. y : array-like, shape = [n_samples] Target values. Will be cast to X\'s dtype if necessary Returns self : object Returns an instance of self.'
def fit(self, X, y):
(X, y) = check_X_y(X, y, ['csr', 'csc'], y_numeric=True, ensure_min_samples=2, estimator=self) X = as_float_array(X, copy=False) (n_samples, n_features) = X.shape (X, y, X_offset, y_offset, X_scale) = self._preprocess_data(X, y, self.fit_intercept, self.normalize) (estimator_func, params) = self._ma...
'Return the parameters passed to the estimator'
def _make_estimator_and_params(self, X, y):
raise NotImplementedError
'Get the boolean mask indicating which features are selected. Returns support : boolean array of shape [# input features] An element is True iff its corresponding feature is selected for retention.'
def _get_support_mask(self):
check_is_fitted(self, 'scores_')
return (self.scores_ > self.selection_threshold)
'Center the data in X but not in y'
def _preprocess_data(self, X, y, fit_intercept, normalize=False):
(X, _, X_offset, _, X_scale) = _preprocess_data(X, y, fit_intercept, normalize=normalize)
return (X, y, X_offset, y, X_scale)
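The override above centers (and optionally scales) X while deliberately leaving y untouched, returning y in both the y and y_offset slots. A plain-NumPy sketch of that one-sided preprocessing, with invented data:

```python
import numpy as np

def center_X_only(X, y):
    """Center the columns of X; return y unchanged (also in the offset slot)."""
    X_offset = X.mean(axis=0)
    return X - X_offset, y, X_offset, y

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
Xc, y_out, X_offset, _ = center_X_only(X, y)
# Columns of Xc have zero mean; y_out is the original y object.
```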
'Fit the model using X, y as training data. Parameters X : array-like, shape (n_samples, n_features) Training data. y : array-like, shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X\'s dtype if necessary Returns self : object returns an instance of self.'
def fit(self, X, y):
(X, y) = check_X_y(X, y, multi_output=True, y_numeric=True) n_features = X.shape[1] (X, y, X_offset, y_offset, X_scale, Gram, Xy) = _pre_fit(X, y, None, self.precompute, self.normalize, self.fit_intercept, copy=True) if (y.ndim == 1): y = y[:, np.newaxis] if ((self.n_nonzero_coefs is None) a...
'Fit the model using X, y as training data. Parameters X : array-like, shape [n_samples, n_features] Training data. y : array-like, shape [n_samples] Target values. Will be cast to X\'s dtype if necessary Returns self : object returns an instance of self.'
def fit(self, X, y):
(X, y) = check_X_y(X, y, y_numeric=True, ensure_min_features=2, estimator=self) X = as_float_array(X, copy=False, force_all_finite=False) cv = check_cv(self.cv, classifier=False) max_iter = (min(max(int((0.1 * X.shape[1])), 5), X.shape[1]) if (not self.max_iter) else self.max_iter) cv_paths = Parall...
'Fit the model according to the given training data. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples,) Target vector relative to X. sample_weight : array-like, s...
def fit(self, X, y, sample_weight=None):
if ((not isinstance(self.C, numbers.Number)) or (self.C < 0)): raise ValueError(('Penalty term must be positive; got (C=%r)' % self.C)) if ((not isinstance(self.max_iter, numbers.Number)) or (self.max_iter < 0)): raise ValueError(('Maximum number of iteration must ...
'Probability estimates. The returned estimates for all classes are ordered by the label of classes. For a multi_class problem, if multi_class is set to be "multinomial" the softmax function is used to find the predicted probability of each class. Else use a one-vs-rest approach, i.e calculate the probability of each cl...
def predict_proba(self, X):
if (not hasattr(self, 'coef_')):
    raise NotFittedError('Call fit before prediction')
calculate_ovr = ((self.coef_.shape[0] == 1) or (self.multi_class == 'ovr'))
if calculate_ovr:
    return super(LogisticRegression, self)._predict_proba_lr(X)
else:
    return softmax(self.decisio...
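The branch above chooses between one-vs-rest sigmoid estimates and, for the multinomial case, a softmax over the decision scores. A small sketch of the row-wise softmax (the shift by the row maximum is the usual numerical-stability trick; the scores are illustrative):

```python
import numpy as np

def softmax_rows(scores):
    """Row-wise softmax; subtracting the row max avoids overflow in exp."""
    z = scores - scores.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

p = softmax_rows(np.array([[1.0, 1.0, 1.0]]))
# Equal scores give equal probabilities: [1/3, 1/3, 1/3]
```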
'Log of probability estimates. The returned estimates for all classes are ordered by the label of classes. Parameters X : array-like, shape = [n_samples, n_features] Returns T : array-like, shape = [n_samples, n_classes] Returns the log-probability of the sample for each class in the model, where classes are ordered as...
def predict_log_proba(self, X):
return np.log(self.predict_proba(X))
'Fit the model according to the given training data. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples,) Target vector relative to X. sample_weight : array-like, s...
def fit(self, X, y, sample_weight=None):
_check_solver_option(self.solver, self.multi_class, self.penalty, self.dual) if ((not isinstance(self.max_iter, numbers.Number)) or (self.max_iter < 0)): raise ValueError(('Maximum number of iteration must be positive; got (max_iter=%r)' % self.max_iter)) if ((not isinstance(...
'Fit the model Parameters X : numpy array of shape [n_samples,n_features] Training data y : numpy array of shape [n_samples] Target values. Will be cast to X\'s dtype if necessary Returns self : returns an instance of self.'
def fit(self, X, y):
(X, y) = check_X_y(X, y, dtype=np.float64, y_numeric=True) (X, y, X_offset_, y_offset_, X_scale_) = self._preprocess_data(X, y, self.fit_intercept, self.normalize, self.copy_X) self.X_offset_ = X_offset_ self.X_scale_ = X_scale_ (n_samples, n_features) = X.shape alpha_ = (1.0 / np.var(y)) la...
'Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X : {array-like, sparse matrix}, shape = (n_samples, n_features) Samples. return_std : boolean, optional Whether to return the standard deviation of posterior prediction. Retu...
def predict(self, X, return_std=False):
y_mean = self._decision_function(X) if (return_std is False): return y_mean else: if self.normalize: X = ((X - self.X_offset_) / self.X_scale_) sigmas_squared_data = (np.dot(X, self.sigma_) * X).sum(axis=1) y_std = np.sqrt((sigmas_squared_data + (1.0 / self.alpha_...
'Fit the ARDRegression model according to the given training data and parameters. Iterative procedure to maximize the evidence Parameters X : array-like, shape = [n_samples, n_features] Training vector, where n_samples is the number of samples and n_features is the number of features. y : array, shape = [n_samples] Tar...
def fit(self, X, y):
(X, y) = check_X_y(X, y, dtype=np.float64, y_numeric=True) (n_samples, n_features) = X.shape coef_ = np.zeros(n_features) (X, y, X_offset_, y_offset_, X_scale_) = self._preprocess_data(X, y, self.fit_intercept, self.normalize, self.copy_X) keep_lambda = np.ones(n_features, dtype=bool) lambda_1 =...
'Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters X : {array-like, sparse matrix}, shape = (n_samples, n_features) Samples. return_std : boolean, optional Whether to return the standard deviation of posterior prediction. Retu...
def predict(self, X, return_std=False):
y_mean = self._decision_function(X) if (return_std is False): return y_mean else: if self.normalize: X = ((X - self.X_offset_) / self.X_scale_) X = X[:, (self.lambda_ < self.threshold_lambda)] sigmas_squared_data = (np.dot(X, self.sigma_) * X).sum(axis=1) ...
'Validate input params.'
def _validate_params(self):
if (not isinstance(self.shuffle, bool)): raise ValueError('shuffle must be either True or False') if (self.max_iter <= 0): raise ValueError(('max_iter must be > zero. Got %f' % self.max_iter)) if (not (0.0 <= self.l1_ratio <= 1.0)): raise ValueErro...
'Get concrete ``LossFunction`` object for str ``loss``.'
def _get_loss_function(self, loss):
try: loss_ = self.loss_functions[loss] (loss_class, args) = (loss_[0], loss_[1:]) if (loss in ('huber', 'epsilon_insensitive', 'squared_epsilon_insensitive')): args = (self.epsilon,) return loss_class(*args) except KeyError: raise ValueError(('The loss %...
'Set the sample weight array.'
def _validate_sample_weight(self, sample_weight, n_samples):
if (sample_weight is None): sample_weight = np.ones(n_samples, dtype=np.float64, order='C') else: sample_weight = np.asarray(sample_weight, dtype=np.float64, order='C') if (sample_weight.shape[0] != n_samples): raise ValueError('Shapes of X and sample_weight do not ...
'Allocate mem for parameters; initialize if provided.'
def _allocate_parameter_mem(self, n_classes, n_features, coef_init=None, intercept_init=None):
if (n_classes > 2): if (coef_init is not None): coef_init = np.asarray(coef_init, order='C') if (coef_init.shape != (n_classes, n_features)): raise ValueError('Provided ``coef_`` does not match dataset. ') self.coef_ = coef_init e...
'Fit a binary classifier on X and y.'
def _fit_binary(self, X, y, alpha, C, sample_weight, learning_rate, max_iter):
(coef, intercept, n_iter_) = fit_binary(self, 1, X, y, alpha, C, learning_rate, max_iter, self._expanded_class_weight[1], self._expanded_class_weight[0], sample_weight) self.t_ += (n_iter_ * X.shape[0]) self.n_iter_ = n_iter_ if (self.average > 0): if (self.average <= (self.t_ - 1)): ...
'Fit a multi-class classifier by combining binary classifiers Each binary classifier predicts one class versus all others. This strategy is called OVA: One Versus All.'
def _fit_multiclass(self, X, y, alpha, C, learning_rate, sample_weight, max_iter):
result = Parallel(n_jobs=self.n_jobs, backend='threading', verbose=self.verbose)((delayed(fit_binary)(self, i, X, y, alpha, C, learning_rate, max_iter, self._expanded_class_weight[i], 1.0, sample_weight) for i in range(len(self.classes_)))) n_iter_ = 0.0 for (i, (_, intercept, n_iter_i)) in enumerate(result...
'Fit linear model with Stochastic Gradient Descent. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Subset of the training data y : numpy array, shape (n_samples,) Subset of the target values classes : array, shape (n_classes,) Classes across all calls to partial_fit. Can be obtained via `n...
def partial_fit(self, X, y, classes=None, sample_weight=None):
if (self.class_weight in ['balanced']):
    raise ValueError("class_weight '{0}' is not supported for partial_fit. In order to use 'balanced' weights, use compute_class_weight('{0}', classes, y). In place of y you can use a large enoug...
'Fit linear model with Stochastic Gradient Descent. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training data y : numpy array, shape (n_samples,) Target values coef_init : array, shape (n_classes, n_features) The initial coefficients to warm-start the optimization. intercept_init : array, ...
def fit(self, X, y, coef_init=None, intercept_init=None, sample_weight=None):
return self._fit(X, y, alpha=self.alpha, C=1.0, loss=self.loss, learning_rate=self.learning_rate, coef_init=coef_init, intercept_init=intercept_init, sample_weight=sample_weight)
'Probability estimates. This method is only available for log loss and modified Huber loss. Multiclass probability estimates are derived from binary (one-vs.-rest) estimates by simple normalization, as recommended by Zadrozny and Elkan. Binary probability estimates for loss="modified_huber" are given by (clip(decision_...
@property
def predict_proba(self):
self._check_proba()
return self._predict_proba
'Log of probability estimates. This method is only available for log loss and modified Huber loss. When loss="modified_huber", probability estimates may be hard zeros and ones, so taking the logarithm is not possible. See ``predict_proba`` for details. Parameters X : array-like, shape (n_samples, n_features) Returns T ...
@property
def predict_log_proba(self):
self._check_proba()
return self._predict_log_proba
'Fit linear model with Stochastic Gradient Descent. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Subset of training data y : numpy array of shape (n_samples,) Subset of target values sample_weight : array-like, shape (n_samples,), optional Weights applied to individual samples. If not provi...
def partial_fit(self, X, y, sample_weight=None):
return self._partial_fit(X, y, self.alpha, C=1.0, loss=self.loss, learning_rate=self.learning_rate, max_iter=1, sample_weight=sample_weight, coef_init=None, intercept_init=None)
'Fit linear model with Stochastic Gradient Descent. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training data y : numpy array, shape (n_samples,) Target values coef_init : array, shape (n_features,) The initial coefficients to warm-start the optimization. intercept_init : array, shape (1,)...
def fit(self, X, y, coef_init=None, intercept_init=None, sample_weight=None):
return self._fit(X, y, alpha=self.alpha, C=1.0, loss=self.loss, learning_rate=self.learning_rate, coef_init=coef_init, intercept_init=intercept_init, sample_weight=sample_weight)
'Predict using the linear model Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Returns array, shape (n_samples,) Predicted target values per element in X.'
def _decision_function(self, X):
check_is_fitted(self, ['t_', 'coef_', 'intercept_'], all_or_any=all)
X = check_array(X, accept_sparse='csr')
scores = (safe_sparse_dot(X, self.coef_.T, dense_output=True) + self.intercept_)
return scores.ravel()
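The score computation above is a single affine map, X @ coef_.T + intercept_, raveled for the single-output case. A toy sketch with hypothetical fitted weights:

```python
import numpy as np

X = np.array([[1.0, 2.0], [0.0, -1.0]])
coef_ = np.array([[0.5, -0.5]])    # hypothetical fitted weights, shape (1, n_features)
intercept_ = np.array([0.1])

# The linear decision function: one affine map, then ravel for the binary case.
scores = (X @ coef_.T + intercept_).ravel()
# -> [1*0.5 + 2*(-0.5) + 0.1, 0*0.5 + (-1)*(-0.5) + 0.1] = [-0.4, 0.6]
```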
'Predict using the linear model Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Returns array, shape (n_samples,) Predicted target values per element in X.'
def predict(self, X):
return self._decision_function(X)
'Fit linear model. Parameters X : numpy array of shape [n_samples, n_features] Training data y : numpy array of shape [n_samples] Target values Returns self : returns an instance of self.'
def fit(self, X, y):
random_state = check_random_state(self.random_state) (X, y) = check_X_y(X, y, y_numeric=True) (n_samples, n_features) = X.shape (n_subsamples, self.n_subpopulation_) = self._check_subparams(n_samples, n_features) self.breakdown_ = _breakdown_point(n_samples, n_subsamples) if self.verbose: ...
'Fit estimator using RANSAC algorithm. Parameters X : array-like or sparse matrix, shape [n_samples, n_features] Training data. y : array-like, shape = [n_samples] or [n_samples, n_targets] Target values. sample_weight : array-like, shape = [n_samples] Individual weights for each sample raises error if sample_weight is...
def fit(self, X, y, sample_weight=None):
X = check_array(X, accept_sparse='csr') y = check_array(y, ensure_2d=False) check_consistent_length(X, y) if (self.base_estimator is not None): base_estimator = clone(self.base_estimator) else: base_estimator = LinearRegression() if (self.min_samples is None): min_samples...
'Predict using the estimated model. This is a wrapper for `estimator_.predict(X)`. Parameters X : numpy array of shape [n_samples, n_features] Returns y : array, shape = [n_samples] or [n_samples, n_targets] Returns predicted values.'
def predict(self, X):
check_is_fitted(self, 'estimator_')
return self.estimator_.predict(X)
'Returns the score of the prediction. This is a wrapper for `estimator_.score(X, y)`. Parameters X : numpy array or sparse matrix of shape [n_samples, n_features] Training data. y : array, shape = [n_samples] or [n_samples, n_targets] Target values. Returns z : float Score of the prediction.'
def score(self, X, y):
check_is_fitted(self, 'estimator_')
return self.estimator_.score(X, y)
'Auxiliary method to fit the model using X, y as training data'
def _fit(self, X, y, max_iter, alpha, fit_path, Xy=None):
n_features = X.shape[1] (X, y, X_offset, y_offset, X_scale) = self._preprocess_data(X, y, self.fit_intercept, self.normalize, self.copy_X) if (y.ndim == 1): y = y[:, np.newaxis] n_targets = y.shape[1] Gram = self._get_gram(self.precompute, X, y) self.alphas_ = [] self.n_iter_ = [] ...
'Fit the model using X, y as training data. Parameters X : array-like, shape (n_samples, n_features) Training data. y : array-like, shape (n_samples,) or (n_samples, n_targets) Target values. Xy : array-like, shape (n_samples,) or (n_samples, n_targets), optional Xy = np.dot(X.T, y) that can be precompu...
def fit(self, X, y, Xy=None):
(X, y) = check_X_y(X, y, y_numeric=True, multi_output=True) alpha = getattr(self, 'alpha', 0.0) if hasattr(self, 'n_nonzero_coefs'): alpha = 0.0 max_iter = self.n_nonzero_coefs else: max_iter = self.max_iter self._fit(X, y, max_iter=max_iter, alpha=alpha, fit_path=self.fit_pa...
'Fit the model using X, y as training data. Parameters X : array-like, shape (n_samples, n_features) Training data. y : array-like, shape (n_samples,) Target values. Returns self : object returns an instance of self.'
def fit(self, X, y):
(X, y) = check_X_y(X, y, y_numeric=True) X = as_float_array(X, copy=self.copy_X) y = as_float_array(y, copy=self.copy_X) cv = check_cv(self.cv, classifier=False) Gram = self.precompute if hasattr(Gram, '__array__'): warnings.warn(("Parameter 'precompute' cannot be an array...
'Fit the model using X, y as training data. Parameters X : array-like, shape (n_samples, n_features) training data. y : array-like, shape (n_samples,) target values. Will be cast to X\'s dtype if necessary copy_X : boolean, optional, default True If ``True``, X will be copied; else, it may be overwritten. Returns self ...
def fit(self, X, y, copy_X=True):
(X, y) = check_X_y(X, y, y_numeric=True) (X, y, Xmean, ymean, Xstd) = LinearModel._preprocess_data(X, y, self.fit_intercept, self.normalize, self.copy_X) max_iter = self.max_iter Gram = self.precompute (alphas_, active_, coef_path_, self.n_iter_) = lars_path(X, y, Gram=Gram, copy_X=copy_X, copy_Gram...
'Fit linear model with Passive Aggressive algorithm. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Subset of the training data y : numpy array of shape [n_samples] Subset of the target values classes : array, shape = [n_classes] Classes across all calls to partial_fit. Can be obtained by v...
def partial_fit(self, X, y, classes=None):
if (self.class_weight == 'balanced'): raise ValueError("class_weight 'balanced' is not supported for partial_fit. For 'balanced' weights, use `sklearn.utils.compute_class_weight` with `class_weight='balanced'`. In place of y you can use a lar...
'Fit linear model with Passive Aggressive algorithm. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training data y : numpy array of shape [n_samples] Target values coef_init : array, shape = [n_classes,n_features] The initial coefficients to warm-start the optimization. intercept_init : ar...
def fit(self, X, y, coef_init=None, intercept_init=None):
lr = ('pa1' if (self.loss == 'hinge') else 'pa2')
return self._fit(X, y, alpha=1.0, C=self.C, loss='hinge', learning_rate=lr, coef_init=coef_init, intercept_init=intercept_init)
'Fit linear model with Passive Aggressive algorithm. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Subset of training data y : numpy array of shape [n_samples] Subset of target values Returns self : returns an instance of self.'
def partial_fit(self, X, y):
lr = ('pa1' if (self.loss == 'epsilon_insensitive') else 'pa2')
return self._partial_fit(X, y, alpha=1.0, C=self.C, loss='epsilon_insensitive', learning_rate=lr, max_iter=1, sample_weight=None, coef_init=None, intercept_init=None)
'Fit linear model with Passive Aggressive algorithm. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training data y : numpy array of shape [n_samples] Target values coef_init : array, shape = [n_features] The initial coefficients to warm-start the optimization. intercept_init : array, shape...
def fit(self, X, y, coef_init=None, intercept_init=None):
lr = ('pa1' if (self.loss == 'epsilon_insensitive') else 'pa2')
return self._fit(X, y, alpha=1.0, C=self.C, loss='epsilon_insensitive', learning_rate=lr, coef_init=coef_init, intercept_init=intercept_init)
'Fit the model according to the given training data. Parameters X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples,) Target vector relative to X. sample_weight : array-like, shape (n_samples,)...
def fit(self, X, y, sample_weight=None):
(X, y) = check_X_y(X, y, copy=False, accept_sparse=['csr'], y_numeric=True) if (sample_weight is not None): sample_weight = np.array(sample_weight) check_consistent_length(y, sample_weight) else: sample_weight = np.ones_like(y) if (self.epsilon < 1.0): raise ValueError(('...
'Learn a list of feature name -> indices mappings. Parameters X : Mapping or iterable over Mappings Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertible to dtype). y : (ignored) Returns self'
def fit(self, X, y=None):
feature_names = []
vocab = {}
for x in X:
    for (f, v) in six.iteritems(x):
        if isinstance(v, six.string_types):
            f = ('%s%s%s' % (f, self.separator, v))
        if (f not in vocab):
            feature_names.append(f)
            vocab[f] = len(vocab)
if self...
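The fit body above builds the name-to-index vocabulary, one-hot expanding string-valued features as name-separator-value pairs while numeric features keep their bare name. A simplified standalone sketch of the same idea (`build_vocab` and the sample dicts are invented; the real class also supports sorting and a configurable separator):

```python
def build_vocab(dicts, separator="="):
    """Assign each feature name a column index; one-hot expand string values."""
    feature_names, vocab = [], {}
    for x in dicts:
        for f, v in x.items():
            if isinstance(v, str):
                # Categorical value: the (name, value) pair becomes its own feature.
                f = "%s%s%s" % (f, separator, v)
            if f not in vocab:
                feature_names.append(f)
                vocab[f] = len(vocab)
    return feature_names, vocab

names, vocab = build_vocab([{"city": "London", "temp": 12.0},
                            {"city": "Paris", "temp": 9.0}])
# names -> ['city=London', 'temp', 'city=Paris']
```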
'Learn a list of feature name -> indices mappings and transform X. Like fit(X) followed by transform(X), but does not require materializing X in memory. Parameters X : Mapping or iterable over Mappings Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertible to dtype...
def fit_transform(self, X, y=None):
return self._transform(X, fitting=True)
'Transform array or sparse matrix X back to feature mappings. X must have been produced by this DictVectorizer\'s transform or fit_transform method; it may only have passed through transformers that preserve the number of features and their order. In the case of one-hot/one-of-K coding, the constructed feature names an...
def inverse_transform(self, X, dict_type=dict):
X = check_array(X, accept_sparse=['csr', 'csc']) n_samples = X.shape[0] names = self.feature_names_ dicts = [dict_type() for _ in xrange(n_samples)] if sp.issparse(X): for (i, j) in zip(*X.nonzero()): dicts[i][names[j]] = X[(i, j)] else: for (i, d) in enumerate(dicts)...
'Transform feature->value dicts to array or sparse matrix. Named features not encountered during fit or fit_transform will be silently ignored. Parameters X : Mapping or iterable over Mappings, length = n_samples Dict(s) or Mapping(s) from feature names (arbitrary Python objects) to feature values (strings or convertib...
def transform(self, X):
if self.sparse: return self._transform(X, fitting=False) else: dtype = self.dtype vocab = self.vocabulary_ X = _tosequence(X) Xa = np.zeros((len(X), len(vocab)), dtype=dtype) for (i, x) in enumerate(X): for (f, v) in six.iteritems(x): i...
'Returns a list of feature names, ordered by their indices. If one-of-K coding is applied to categorical features, this will include the constructed feature names but not the original ones.'
def get_feature_names(self):
return self.feature_names_
'Restrict the features to those in support using feature selection. This function modifies the estimator in-place. Parameters support : array-like Boolean mask or list of indices (as returned by the get_support member of feature selectors). indices : boolean, optional Whether support is a list of indices. Returns self ...
def restrict(self, support, indices=False):
if (not indices):
    support = np.where(support)[0]
names = self.feature_names_
new_vocab = {}
for i in support:
    new_vocab[names[i]] = len(new_vocab)
self.vocabulary_ = new_vocab
self.feature_names_ = [f for (f, i) in sorted(six.iteritems(new_vocab), key=itemgetter(1))]
return self
'No-op. This method doesn\'t do anything. It exists purely for compatibility with the scikit-learn transformer API. Parameters X : array-like Returns self : FeatureHasher'
def fit(self, X=None, y=None):
self._validate_params(self.n_features, self.input_type)
return self
'Transform a sequence of instances to a scipy.sparse matrix. Parameters raw_X : iterable over iterable over raw features, length = n_samples Samples. Each sample must be an iterable (e.g., a list or tuple) containing/generating feature names (and optionally values, see the input_type constructor argument) which will be...
def transform(self, raw_X):
raw_X = iter(raw_X)
if (self.input_type == 'dict'):
    raw_X = (_iteritems(d) for d in raw_X)
elif (self.input_type == 'string'):
    raw_X = (((f, 1) for f in x) for x in raw_X)
(indices, indptr, values) = _hashing.transform(raw_X, self.n_features, self.dtype, self.alternate_sign)
n_sample...
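The transform above delegates to a compiled hashing routine that maps each (feature, value) pair to a column via a signed hash (scikit-learn uses MurmurHash3 with `alternate_sign`). The sketch below uses md5 purely as a deterministic stand-in hash, so it illustrates the fixed-width "hashing trick" rather than reproducing FeatureHasher's exact output:

```python
import hashlib

import numpy as np

def hash_features(pairs, n_features=16):
    """Toy hashing trick: fold (feature, value) pairs into a fixed-width vector."""
    out = np.zeros(n_features)
    for f, v in pairs:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16)  # deterministic stand-in hash
        sign = 1.0 if (h & 1) else -1.0   # signed hashing reduces collision bias
        out[(h >> 1) % n_features] += sign * v
    return out

vec = hash_features([("dog", 2.0), ("cat", 1.0)])
# vec has shape (16,) no matter how many distinct feature names appear
```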
'Decode the input into a string of unicode symbols The decoding strategy depends on the vectorizer parameters.'
def decode(self, doc):
if (self.input == u'filename'): with open(doc, u'rb') as fh: doc = fh.read() elif (self.input == u'file'): doc = doc.read() if isinstance(doc, bytes): doc = doc.decode(self.encoding, self.decode_error) if (doc is np.nan): raise ValueError(u'np.nan is an ...
'Turn tokens into a sequence of n-grams after stop words filtering'
def _word_ngrams(self, tokens, stop_words=None):
if (stop_words is not None):
    tokens = [w for w in tokens if (w not in stop_words)]
(min_n, max_n) = self.ngram_range
if (max_n != 1):
    original_tokens = tokens
    if (min_n == 1):
        tokens = list(original_tokens)
        min_n += 1
    else:
        tokens = []
    ...
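The method above expands a token list into word n-grams over `ngram_range`. A compact standalone version of the same idea (simplified: no stop-word filtering and none of the original's micro-optimizations):

```python
def word_ngrams(tokens, ngram_range=(1, 2)):
    """Join consecutive tokens into space-separated n-grams."""
    min_n, max_n = ngram_range
    n_tokens = len(tokens)
    ngrams = []
    for n in range(min_n, min(max_n, n_tokens) + 1):
        for i in range(n_tokens - n + 1):
            ngrams.append(" ".join(tokens[i:i + n]))
    return ngrams

word_ngrams(["the", "quick", "fox"])
# -> ['the', 'quick', 'fox', 'the quick', 'quick fox']
```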
'Tokenize text_document into a sequence of character n-grams'
def _char_ngrams(self, text_document):
text_document = self._white_spaces.sub(u' ', text_document) text_len = len(text_document) (min_n, max_n) = self.ngram_range if (min_n == 1): ngrams = list(text_document) min_n += 1 else: ngrams = [] ngrams_append = ngrams.append for n in xrange(min_n, min((max_n + ...
'Whitespace sensitive char-n-gram tokenization. Tokenize text_document into a sequence of character n-grams operating only inside word boundaries. n-grams at the edges of words are padded with space.'
def _char_wb_ngrams(self, text_document):
text_document = self._white_spaces.sub(u' ', text_document) (min_n, max_n) = self.ngram_range ngrams = [] ngrams_append = ngrams.append for w in text_document.split(): w = ((u' ' + w) + u' ') w_len = len(w) for n in xrange(min_n, (max_n + 1)): offset = 0 ...
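The word-boundary variant above pads each whitespace-separated word with spaces and emits character n-grams only inside the padded word. A simplified sketch (the real implementation also special-cases words shorter than `min_n`; that is omitted here):

```python
def char_wb_ngrams(text, ngram_range=(2, 3)):
    """Char n-grams that never cross word boundaries; words are space-padded."""
    min_n, max_n = ngram_range
    ngrams = []
    for w in text.split():
        w = " " + w + " "                   # mark word edges with padding spaces
        for n in range(min_n, max_n + 1):
            for i in range(len(w) - n + 1):
                ngrams.append(w[i:i + n])
    return ngrams

char_wb_ngrams("hi yo", ngram_range=(2, 2))
# -> [' h', 'hi', 'i ', ' y', 'yo', 'o ']
```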
'Return a function to preprocess the text before tokenization'
def build_preprocessor(self):
if (self.preprocessor is not None): return self.preprocessor noop = (lambda x: x) if (not self.strip_accents): strip_accents = noop elif callable(self.strip_accents): strip_accents = self.strip_accents elif (self.strip_accents == u'ascii'): strip_accents = strip_accen...
'Return a function that splits a string into a sequence of tokens'
def build_tokenizer(self):
if (self.tokenizer is not None):
    return self.tokenizer
token_pattern = re.compile(self.token_pattern)
return (lambda doc: token_pattern.findall(doc))
'Build or fetch the effective stop words list'
def get_stop_words(self):
return _check_stop_list(self.stop_words)
'Return a callable that handles preprocessing and tokenization'
def build_analyzer(self):
if callable(self.analyzer): return self.analyzer preprocess = self.build_preprocessor() if (self.analyzer == u'char'): return (lambda doc: self._char_ngrams(preprocess(self.decode(doc)))) elif (self.analyzer == u'char_wb'): return (lambda doc: self._char_wb_ngrams(preprocess(self...
'Check if vocabulary is empty or missing (not fitted)'
def _check_vocabulary(self):
msg = u"%(name)s - Vocabulary wasn't fitted."
check_is_fitted(self, u'vocabulary_', msg=msg)
if (len(self.vocabulary_) == 0):
    raise ValueError(u'Vocabulary is empty')
'Does nothing: this transformer is stateless. This method is just there to mark the fact that this transformer can work in a streaming setup.'
def partial_fit(self, X, y=None):
return self
'Does nothing: this transformer is stateless.'
def fit(self, X, y=None):
if isinstance(X, six.string_types):
    raise ValueError(u'Iterable over raw text documents expected, string object received.')
self._get_hasher().fit(X, y=y)
return self
'Transform a sequence of documents to a document-term matrix. Parameters X : iterable over raw text documents, length = n_samples Samples. Each sample must be a text document (either bytes or unicode strings, file name or file object depending on the constructor argument) which will be tokenized and hashed. Returns X :...
def transform(self, X):
if isinstance(X, six.string_types): raise ValueError(u'Iterable over raw text documents expected, string object received.') analyzer = self.build_analyzer() X = self._get_hasher().transform((analyzer(doc) for doc in X)) if self.binary: X.data.fill(1) if (self....
'Sort features by name Returns a reordered matrix and modifies the vocabulary in place'
def _sort_features(self, X, vocabulary):
sorted_features = sorted(six.iteritems(vocabulary)) map_index = np.empty(len(sorted_features), dtype=np.int32) for (new_val, (term, old_val)) in enumerate(sorted_features): vocabulary[term] = new_val map_index[old_val] = new_val X.indices = map_index.take(X.indices, mode=u'clip') ret...
'Remove too rare or too common features. Prune features that are non zero in more samples than high or less documents than low, modifying the vocabulary, and restricting it to at most the limit most frequent. This does not prune samples with zero features.'
def _limit_features(self, X, vocabulary, high=None, low=None, limit=None):
if (high is None) and (low is None) and (limit is None):
    return X, set()
dfs = _document_frequency(X)
tfs = np.asarray(X.sum(axis=0)).ravel()
mask = np.ones(len(dfs), dtype=bool)
if high is not None:
    mask &= dfs <= high
if low is not None:
    mask &= dfs >= low
...
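The boolean-mask pruning can be checked on a toy document-frequency vector. A minimal sketch, assuming four terms observed over ten documents with the hypothetical bounds `low=2`, `high=8`:

```python
import numpy as np

dfs = np.array([1, 5, 9, 10])    # document frequency of each of 4 terms
high, low = 8, 2                 # keep terms with 2 <= df <= 8

mask = np.ones(len(dfs), dtype=bool)
mask &= dfs <= high              # drop terms appearing in too many documents
mask &= dfs >= low               # drop terms appearing in too few documents
kept = np.where(mask)[0]
```

Only the term with df = 5 survives; the term seen once is too rare, the terms seen in 9 and 10 documents are too common.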
'Create sparse feature matrix, and vocabulary where fixed_vocab=False'
def _count_vocab(self, raw_documents, fixed_vocab):
if fixed_vocab:
    vocabulary = self.vocabulary_
else:
    vocabulary = defaultdict()
    vocabulary.default_factory = vocabulary.__len__
analyze = self.build_analyzer()
j_indices = []
indptr = _make_int_array()
values = _make_int_array()
indptr.append(0)
for doc in raw_docu...
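The `default_factory = vocabulary.__len__` idiom is worth seeing on its own: because `__len__` is called before the missing key is inserted, each previously unseen term receives the next consecutive integer id. A minimal sketch:

```python
from collections import defaultdict

# Each unseen term gets an id equal to the vocabulary's size at lookup time.
vocabulary = defaultdict()
vocabulary.default_factory = vocabulary.__len__

for term in ["the", "cat", "the", "mat"]:
    _ = vocabulary[term]  # lookup alone is enough to assign an id

ids = dict(vocabulary)
```

Repeated terms reuse their existing id, so a single pass over the corpus builds both the vocabulary and the column indices of the count matrix.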
'Learn a vocabulary dictionary of all tokens in the raw documents. Parameters raw_documents : iterable An iterable which yields either str, unicode or file objects. Returns self'
def fit(self, raw_documents, y=None):
self.fit_transform(raw_documents)
return self
'Learn the vocabulary dictionary and return term-document matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters raw_documents : iterable An iterable which yields either str, unicode or file objects. Returns X : array, [n_samples, n_features] Document-term matrix.'
def fit_transform(self, raw_documents, y=None):
if isinstance(raw_documents, six.string_types):
    raise ValueError(u'Iterable over raw text documents expected, string object received.')
self._validate_vocabulary()
max_df = self.max_df
min_df = self.min_df
max_features = self.max_features
(vocabulary, X) = self._c...
'Transform documents to document-term matrix. Extract token counts out of raw text documents using the vocabulary fitted with fit or the one provided to the constructor. Parameters raw_documents : iterable An iterable which yields either str, unicode or file objects. Returns X : sparse matrix, [n_samples, n_features] D...
def transform(self, raw_documents):
if isinstance(raw_documents, six.string_types):
    raise ValueError(u'Iterable over raw text documents expected, string object received.')
if not hasattr(self, u'vocabulary_'):
    self._validate_vocabulary()
self._check_vocabulary()
(_, X) = self._count_vocab(raw_docu...
'Return terms per document with nonzero entries in X. Parameters X : {array, sparse matrix}, shape = [n_samples, n_features] Returns X_inv : list of arrays, len = n_samples List of arrays of terms.'
def inverse_transform(self, X):
self._check_vocabulary()
if sp.issparse(X):
    X = X.tocsr()
else:
    X = np.asmatrix(X)
n_samples = X.shape[0]
terms = np.array(list(self.vocabulary_.keys()))
indices = np.array(list(self.vocabulary_.values()))
inverse_vocabulary = terms[np.argsort(indices)]
return [inverse_vo...
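The `argsort` inversion of the vocabulary can be seen in isolation. A minimal sketch with a toy term-to-index mapping:

```python
import numpy as np

vocabulary = {"b": 1, "a": 0, "c": 2}            # term -> column index
terms = np.array(list(vocabulary.keys()))
indices = np.array(list(vocabulary.values()))

# Reorder terms so that position i holds the term whose index is i.
inverse_vocabulary = terms[np.argsort(indices)]
```

`inverse_vocabulary[i]` now names feature column `i`, which is exactly what `inverse_transform` needs to turn nonzero column indices back into terms.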
'Array mapping from feature integer indices to feature name'
def get_feature_names(self):
self._check_vocabulary()
return [t for (t, i) in sorted(six.iteritems(self.vocabulary_), key=itemgetter(1))]
'Learn the idf vector (global term weights) Parameters X : sparse matrix, [n_samples, n_features] a matrix of term/token counts'
def fit(self, X, y=None):
if not sp.issparse(X):
    X = sp.csc_matrix(X)
if self.use_idf:
    (n_samples, n_features) = X.shape
    df = _document_frequency(X)
    df += int(self.smooth_idf)
    n_samples += int(self.smooth_idf)
    idf = np.log(float(n_samples) / df) + 1.0
    self._idf_diag = sp.spdi...
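The smoothed idf computation can be checked numerically. With `smooth_idf`, one is added to both the document count and every document frequency, as if a virtual document contained every term once; a term present in every document then gets an idf of exactly 1 (it is never zeroed out). A minimal sketch with three hypothetical terms over four documents:

```python
import numpy as np

n_samples = 4
df = np.array([1, 2, 4], dtype=float)   # document frequency of each term
smooth = 1                              # smooth_idf: one virtual all-terms document

idf = np.log((n_samples + smooth) / (df + smooth)) + 1.0
```

Rarer terms receive strictly larger weights, and the ubiquitous term (df equal to `n_samples`) gets idf = log(5/5) + 1 = 1.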
'Transform a count matrix to a tf or tf-idf representation Parameters X : sparse matrix, [n_samples, n_features] a matrix of term/token counts copy : boolean, default True Whether to copy X and operate on the copy or perform in-place operations. Returns vectors : sparse matrix, [n_samples, n_features]'
def transform(self, X, copy=True):
if hasattr(X, u'dtype') and np.issubdtype(X.dtype, np.float):
    X = sp.csr_matrix(X, copy=copy)
else:
    X = sp.csr_matrix(X, dtype=np.float64, copy=copy)
(n_samples, n_features) = X.shape
if self.sublinear_tf:
    np.log(X.data, X.data)
    X.data += 1
if self.use_idf:
    ...
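The dense analogue of the tf-idf reweighting plus the l2 row normalization (applied later in the pipeline when `norm='l2'`) can be sketched on one document with hypothetical counts and idf weights:

```python
import numpy as np

counts = np.array([[3.0, 0.0, 1.0]])   # raw term counts for one document
idf = np.array([1.0, 2.0, 1.5])        # idf weights learned in fit

tfidf = counts * idf                   # per-term tf-idf reweighting
# l2 normalization: each row is scaled to unit Euclidean length.
tfidf /= np.sqrt((tfidf ** 2).sum(axis=1, keepdims=True))
```

Zero counts stay zero under both steps, which is why the real implementation can operate directly on the `.data` array of the sparse matrix.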
'Learn vocabulary and idf from training set. Parameters raw_documents : iterable an iterable which yields either str, unicode or file objects Returns self : TfidfVectorizer'
def fit(self, raw_documents, y=None):
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
self._tfidf.fit(X)
return self
'Learn vocabulary and idf, return term-document matrix. This is equivalent to fit followed by transform, but more efficiently implemented. Parameters raw_documents : iterable an iterable which yields either str, unicode or file objects Returns X : sparse matrix, [n_samples, n_features] Tf-idf-weighted document-term mat...
def fit_transform(self, raw_documents, y=None):
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
self._tfidf.fit(X)
return self._tfidf.transform(X, copy=False)
'Transform documents to document-term matrix. Uses the vocabulary and document frequencies (df) learned by fit (or fit_transform). Parameters raw_documents : iterable an iterable which yields either str, unicode or file objects copy : boolean, default True Whether to copy X and operate on the copy or perform in-place o...
def transform(self, raw_documents, copy=True):
check_is_fitted(self, u'_tfidf', u'The tfidf vector is not fitted')
X = super(TfidfVectorizer, self).transform(raw_documents)
return self._tfidf.transform(X, copy=False)
'Do nothing and return the estimator unchanged. This method is just there to implement the usual API and hence work in pipelines.'
def fit(self, X, y=None):
return self
'Transforms the image samples in X into a matrix of patch data. Parameters X : array, shape = (n_samples, image_height, image_width) or (n_samples, image_height, image_width, n_channels) Array of images from which to extract patches. For color images, the last dimension specifies the channel: a RGB image would have `n_...
def transform(self, X):
self.random_state = check_random_state(self.random_state)
(n_images, i_h, i_w) = X.shape[:3]
X = np.reshape(X, (n_images, i_h, i_w, -1))
n_channels = X.shape[-1]
if self.patch_size is None:
    patch_size = (i_h // 10, i_w // 10)
else:
    patch_size = self.patch_size
(p_h, ...
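The sliding-window patch extraction itself can be sketched naively. This hypothetical `extract_patches_2d_simple` is a plain double loop, not the library's strided view, but it produces the same patch set for a single-channel image:

```python
import numpy as np

def extract_patches_2d_simple(image, patch_size):
    """Collect every (p_h, p_w) window of a 2-D image (naive loop version)."""
    p_h, p_w = patch_size
    i_h, i_w = image.shape
    # One patch per valid top-left corner, row-major order.
    patches = [image[r:r + p_h, c:c + p_w]
               for r in range(i_h - p_h + 1)
               for c in range(i_w - p_w + 1)]
    return np.stack(patches)

img = np.arange(16).reshape(4, 4)
patches = extract_patches_2d_simple(img, (2, 2))
```

A 4x4 image yields (4 - 2 + 1)^2 = 9 overlapping 2x2 patches; the first patch is the top-left window.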
'The dummy arguments are to test that this fit function can accept non-array arguments through cross-validation, such as: - int - str (this is actually array-like) - object - function'
def fit(self, X, Y=None, sample_weight=None, class_prior=None, sparse_sample_weight=None, sparse_param=None, dummy_int=None, dummy_str=None, dummy_obj=None, callback=None):
self.dummy_int = dummy_int
self.dummy_str = dummy_str
self.dummy_obj = dummy_obj
if callback is not None:
    callback(self)
if self.allow_nd:
    X = X.reshape(len(X), -1)
if (X.ndim >= 3) and (not self.allow_nd):
    raise ValueError('X cannot be d')
if (sample_w...
'Fit the SVM model according to the given training data. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Training vectors, where n_samples is the number of samples and n_features is the number of features. For kernel="precomputed", the expected shape of X is (n_samples, n_samples). y : array-l...
def fit(self, X, y, sample_weight=None):
rnd = check_random_state(self.random_state)
sparse = sp.isspmatrix(X)
if sparse and (self.kernel == 'precomputed'):
    raise TypeError('Sparse precomputed kernels are not supported.')
self._sparse = sparse and (not callable(self.kernel))
(X, y) = check_X_y(X, y, dtype=np.floa...
'Validation of y and class_weight. Default implementation for SVR and one-class; overridden in BaseSVC.'
def _validate_targets(self, y):
self.class_weight_ = np.empty(0)
return column_or_1d(y, warn=True).astype(np.float64)
'Perform regression on samples in X. For a one-class model, +1 (inlier) or -1 (outlier) is returned. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) For kernel="precomputed", the expected shape of X is (n_samples_test, n_samples_train). Returns y_pred : array, shape (n_samples,)'
def predict(self, X):
X = self._validate_for_predict(X)
predict = self._sparse_predict if self._sparse else self._dense_predict
return predict(X)
'Return the data transformed by a callable kernel'
def _compute_kernel(self, X):
if callable(self.kernel):
    kernel = self.kernel(X, self.__Xfit)
    if sp.issparse(kernel):
        kernel = kernel.toarray()
    X = np.asarray(kernel, dtype=np.float64, order='C')
return X
'Distance of the samples X to the separating hyperplane. Parameters X : array-like, shape (n_samples, n_features) Returns X : array-like, shape (n_samples, n_class * (n_class-1) / 2) Returns the decision function of the sample for each class in the model.'
def _decision_function(self, X):
X = self._validate_for_predict(X)
X = self._compute_kernel(X)
if self._sparse:
    dec_func = self._sparse_decision_function(X)
else:
    dec_func = self._dense_decision_function(X)
if (self._impl in ['c_svc', 'nu_svc']) and (len(self.classes_) == 2):
    return -dec_func.ravel()
...
'Distance of the samples X to the separating hyperplane. Parameters X : array-like, shape (n_samples, n_features) Returns X : array-like, shape (n_samples, n_classes * (n_classes-1) / 2) Returns the decision function of the sample for each class in the model. If decision_function_shape=\'ovr\', the shape is (n_samples,...
def decision_function(self, X):
dec = self._decision_function(X)
if (self.decision_function_shape == 'ovr') and (len(self.classes_) > 2):
    return _ovr_decision_function(dec < 0, -dec, len(self.classes_))
return dec
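The core idea behind collapsing the n_classes * (n_classes - 1) / 2 pairwise (one-vs-one) values into per-class scores is vote counting. This is a simplified sketch of that aggregation for a single sample; the library's `_ovr_decision_function` additionally breaks ties using the raw decision values, which this hypothetical helper omits:

```python
import numpy as np
from itertools import combinations

def ovo_votes(decisions, n_classes):
    """Count per-class wins from pairwise decision values for one sample.

    decisions[k] > 0 means class i beats class j for the k-th pair (i, j),
    with pairs enumerated as combinations(range(n_classes), 2).
    """
    votes = np.zeros(n_classes)
    for d, (i, j) in zip(decisions, combinations(range(n_classes), 2)):
        if d > 0:
            votes[i] += 1
        else:
            votes[j] += 1
    return votes

v = ovo_votes([1.0, 0.5, 2.0], 3)  # pairs: (0, 1), (0, 2), (1, 2)
```

Class 0 wins both of its pairwise contests, class 1 wins one, class 2 none, so class 0 would be predicted.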
'Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) For kernel="precomputed", the expected shape of X is [n_samples_test, n_samples_train] Returns y_pred : array, shape (n_samples,) Class labels for samples in X...
def predict(self, X):
y = super(BaseSVC, self).predict(X)
return self.classes_.take(np.asarray(y, dtype=np.intp))
'Compute probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute `probability` set to True. Parameters X : array-like, shape (n_samples, n_features) For kernel="precomputed", the expected shape of X is [n_samples_test, n_samples_t...
@property
def predict_proba(self):
self._check_proba()
return self._predict_proba
'Compute log probabilities of possible outcomes for samples in X. The model needs to have probability information computed at training time: fit with attribute `probability` set to True. Parameters X : array-like, shape (n_samples, n_features) For kernel="precomputed", the expected shape of X is [n_samples_test, n_sampl...
@property
def predict_log_proba(self):
self._check_proba()
return self._predict_log_proba
'Fit the model according to the given training data. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target vector relative to X sample_weight : array-like,...
def fit(self, X, y, sample_weight=None):
msg = "loss='%s' has been deprecated in favor of loss='%s' as of 0.16. Backward compatibility for the loss='%s' will be removed in %s"
if self.loss in ('l1', 'l2'):
    old_loss = self.loss
    self.loss = {'l1': 'hinge', 'l2': 'squared_hinge...
'Fit the model according to the given training data. Parameters X : {array-like, sparse matrix}, shape = [n_samples, n_features] Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape = [n_samples] Target vector relative to X sample_weight : array-like,...
def fit(self, X, y, sample_weight=None):
msg = "loss='%s' has been deprecated in favor of loss='%s' as of 0.16. Backward compatibility for the loss='%s' will be removed in %s"
if self.loss in ('l1', 'l2'):
    old_loss = self.loss
    self.loss = {'l1': 'epsilon_insensitive', 'l2': ...
'Detects the soft boundary of the set of samples X. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) Set of samples, where n_samples is the number of samples and n_features is the number of features. sample_weight : array-like, shape (n_samples,) Per-sample weights. Rescale C per sample. Higher...
def fit(self, X, y=None, sample_weight=None, **params):
super(OneClassSVM, self).fit(X, np.ones(_num_samples(X)), sample_weight=sample_weight, **params)
return self
'Signed distance to the separating hyperplane. Signed distance is positive for an inlier and negative for an outlier. Parameters X : array-like, shape (n_samples, n_features) Returns X : array-like, shape (n_samples,) Returns the decision function of the samples.'
def decision_function(self, X):
dec = self._decision_function(X)
return dec
'Perform classification on samples in X. For a one-class model, +1 or -1 is returned. Parameters X : {array-like, sparse matrix}, shape (n_samples, n_features) For kernel="precomputed", the expected shape of X is [n_samples_test, n_samples_train] Returns y_pred : array, shape (n_samples,) Class labels for samples in X...
def predict(self, X):
y = super(OneClassSVM, self).predict(X)
return np.asarray(y, dtype=np.intp)
'Return non-default make_scorer arguments for repr.'
def _factory_args(self):
return ''
'Evaluate predicted target values for X relative to y_true. Parameters estimator : object Trained estimator to use for scoring. Must have a predict_proba method; the output of that is used to compute the score. X : array-like or sparse matrix Test data that will be fed to estimator.predict. y_true : array-like Gold sta...
def __call__(self, estimator, X, y_true, sample_weight=None):
super(_PredictScorer, self).__call__(estimator, X, y_true, sample_weight=sample_weight)
y_pred = estimator.predict(X)
if sample_weight is not None:
    return self._sign * self._score_func(y_true, y_pred, sample_weight=sample_weight, **self._kwargs)
else:
    return self._sign * self._score...
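The `self._sign` multiplication is the trick that lets every scorer obey a uniform "greater is better" convention: error metrics are negated rather than special-cased. A minimal standalone sketch (the `make_scorer_sign` and `abs_error` names are hypothetical, standing in for `make_scorer` and a real metric):

```python
def make_scorer_sign(score_func, greater_is_better=True):
    """Wrap a metric so that larger returned values always mean better."""
    sign = 1 if greater_is_better else -1
    def scorer(y_true, y_pred):
        return sign * score_func(y_true, y_pred)
    return scorer

def abs_error(y_true, y_pred):
    """Toy loss: sum of absolute differences (lower is better)."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred))

neg_mae = make_scorer_sign(abs_error, greater_is_better=False)
s = neg_mae([1, 2, 3], [1, 2, 5])
```

The loss of 2 is reported as -2, so model selection can always maximize the scorer's output.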
'Evaluate predicted probabilities for X relative to y_true. Parameters clf : object Trained classifier to use for scoring. Must have a predict_proba method; the output of that is used to compute the score. X : array-like or sparse matrix Test data that will be fed to clf.predict_proba. y : array-like Gold standard targ...
def __call__(self, clf, X, y, sample_weight=None):
super(_ProbaScorer, self).__call__(clf, X, y, sample_weight=sample_weight)
y_pred = clf.predict_proba(X)
if sample_weight is not None:
    return self._sign * self._score_func(y, y_pred, sample_weight=sample_weight, **self._kwargs)
else:
    return self._sign * self._score_func(y, y_pred, *...