languageflow

Flow

class languageflow.flow.Flow[source]

Pipeline to build a model

Examples

>>> from languageflow.flow import Flow
>>> flow = Flow()
>>> flow.data(X, y)
>>> flow.transform(TfidfTransformer())
>>> model = Model(SGD(), "SGD")
>>> flow.add_model(model)
>>> flow.train()
add_model(model)[source]

Add model to flow

add_score(score)[source]
data(X=None, y=None, sentences=None)[source]

Add data to flow

export(model_name, export_folder)[source]

Export model and transformers to export_folder

Parameters:
  • model_name (string) – name of model to export
  • export_folder (string) – folder to store exported model and transformers
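
A brief sketch, continuing the Examples shown for Flow above and assuming the model was registered under the name "SGD"; the folder name "exported_models" is illustrative:

>>> flow.train()
>>> flow.export(model_name="SGD", export_folder="exported_models")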
set_learning_curve(start, stop, offset)[source]
set_validation(validation)[source]
train()[source]

Train model with transformed data

transform(transformer)[source]

Add a transformer to the flow and apply it to the data in the flow

Parameters:transformer (Transformer) – a transformer used to transform the data

languageflow.transformer

NumberRemover

class languageflow.transformer.number.NumberRemover[source]

Remove numbers in documents

transform(raw_documents)[source]

Remove numbers in each document

Parameters:raw_documents (iterable) – An iterable which yields either str or unicode objects
Returns:X – cleaned documents
Return type:iterable
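
A minimal sketch of using the transformer on its own; the sample documents are illustrative:

>>> from languageflow.transformer.number import NumberRemover
>>> docs = ["gia 250000 dong", "giao hang luc 9h sang"]
>>> cleaned = NumberRemover().transform(docs)  # digits removed from each document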

CountVectorizer

class languageflow.transformer.count.CountVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, stop_words=None, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), analyzer='word', max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.int64'>)[source]

Convert a collection of text documents to a matrix of token counts. This implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix. If you do not provide an a-priori dictionary and you do not use an analyzer that does some kind of feature selection, then the number of features will be equal to the vocabulary size found by analyzing the data. Read more in the scikit-learn User Guide.

Parameters:
  • input (string {'filename', 'file', 'content'}) – If ‘filename’, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If ‘file’, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of string or bytes items that are analyzed directly.
  • encoding (string, 'utf-8' by default.) – If bytes or files are given to analyze, this encoding is used to decode.
  • decode_error ({'strict', 'ignore', 'replace'}) – Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’.
  • strip_accents ({'ascii', 'unicode', None}) – Remove accents during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have a direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing.
  • analyzer (string, {'word', 'char', 'char_wb'} or callable) – Whether the feature should be made of word or character n-grams. Option ‘char_wb’ creates character n-grams only from text inside word boundaries; n-grams at the edges of words are padded with space. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input.
  • preprocessor (callable or None (default)) – Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps.
  • tokenizer (callable or None (default)) – Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
  • ngram_range (tuple (min_n, max_n)) – The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
  • stop_words (string {'english'}, list, or None (default)) – If ‘english’, a built-in stop word list for English is used. If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms.
  • lowercase (boolean, True by default) – Convert all characters to lowercase before tokenizing.
  • token_pattern (string) – Regular expression denoting what constitutes a “token”, only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator).
  • max_df (float in range [0.0, 1.0] or int, default=1.0) – When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
  • min_df (float in range [0.0, 1.0] or int, default=1) – When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
  • max_features (int or None, default=None) – If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.
  • vocabulary (Mapping or iterable, optional) – Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents. Indices in the mapping should not be repeated and should not have any gap between 0 and the largest index.
  • binary (boolean, default=False) – If True, all non zero counts are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts.
  • dtype (type, optional) – Type of the matrix returned by fit_transform() or transform().
vocabulary_

dict – A mapping of terms to feature indices.

stop_words_

set

Terms that were ignored because they either:
  • occurred in too many documents (max_df)
  • occurred in too few documents (min_df)
  • were cut off by feature selection (max_features).

This is only available if no vocabulary was given.

See also

HashingVectorizer, TfidfVectorizer

Notes

The stop_words_ attribute can get large and increase the model size when pickling. This attribute is provided only for introspection and can be safely removed using delattr or set to None before pickling.
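
A minimal sketch of the idea in the note above; whether languageflow itself persists transformers with pickle (e.g. as count.transformer.bin) is an assumption here:

>>> import pickle
>>> from languageflow.transformer.count import CountVectorizer
>>> vectorizer = CountVectorizer()
>>> _ = vectorizer.fit_transform(["hello world", "hello languageflow"])
>>> vectorizer.stop_words_ = None  # introspection-only attribute; dropping it shrinks the pickle
>>> with open("count.transformer.bin", "wb") as f:
...     pickle.dump(vectorizer, f)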

fit_transform(raw_documents, y=None)[source]

Learn the vocabulary dictionary and return term-document matrix. This is equivalent to fit followed by transform, but more efficiently implemented.

Parameters:raw_documents (iterable) – An iterable which yields either str, unicode or file objects.
Returns:X – Document-term matrix.
Return type:array, [n_samples, n_features]
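
A hedged sketch of fitting the vectorizer directly; the toy corpus and parameter values are illustrative only:

>>> from languageflow.transformer.count import CountVectorizer
>>> docs = ["hello world", "hello languageflow"]
>>> vectorizer = CountVectorizer(ngram_range=(1, 1), min_df=1)
>>> X = vectorizer.fit_transform(docs)  # document-term matrix, shape [n_samples, n_features]
>>> sorted(vectorizer.vocabulary_)      # e.g. ['hello', 'languageflow', 'world']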

TfidfVectorizer

class languageflow.transformer.tfidf.TfidfVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True, preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\b\w\w+\b', ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False, dtype=<class 'numpy.float64'>, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)[source]

Convert a collection of raw documents to a matrix of TF-IDF features.

Parameters:
  • input (string {'filename', 'file', 'content'}) – If ‘filename’, the sequence passed as an argument to fit is expected to be a list of filenames that need reading to fetch the raw content to analyze. If ‘file’, the sequence items must have a ‘read’ method (file-like object) that is called to fetch the bytes in memory. Otherwise the input is expected to be a sequence of string or bytes items that are analyzed directly.
  • encoding (string, 'utf-8' by default.) – If bytes or files are given to analyze, this encoding is used to decode.
  • decode_error ({'strict', 'ignore', 'replace'}) – Instruction on what to do if a byte sequence is given to analyze that contains characters not of the given encoding. By default, it is ‘strict’, meaning that a UnicodeDecodeError will be raised. Other values are ‘ignore’ and ‘replace’.
  • strip_accents ({'ascii', 'unicode', None}) – Remove accents during the preprocessing step. ‘ascii’ is a fast method that only works on characters that have a direct ASCII mapping. ‘unicode’ is a slightly slower method that works on any characters. None (default) does nothing.
  • analyzer (string, {'word', 'char'} or callable) – Whether the feature should be made of word or character n-grams. If a callable is passed it is used to extract the sequence of features out of the raw, unprocessed input.
  • preprocessor (callable or None (default)) – Override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps.
  • tokenizer (callable or None (default)) – Override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'.
  • ngram_range (tuple (min_n, max_n)) – The lower and upper boundary of the range of n-values for different n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used.
  • stop_words (string {'english'}, list, or None (default)) – If a string, it is passed to _check_stop_list and the appropriate stop list is returned. ‘english’ is currently the only supported string value. If a list, that list is assumed to contain stop words, all of which will be removed from the resulting tokens. Only applies if analyzer == 'word'. If None, no stop words will be used. max_df can be set to a value in the range [0.7, 1.0) to automatically detect and filter stop words based on intra corpus document frequency of terms.
  • lowercase (boolean, default True) – Convert all characters to lowercase before tokenizing.
  • token_pattern (string) – Regular expression denoting what constitutes a “token”, only used if analyzer == 'word'. The default regexp selects tokens of 2 or more alphanumeric characters (punctuation is completely ignored and always treated as a token separator).
  • max_df (float in range [0.0, 1.0] or int, default=1.0) – When building the vocabulary ignore terms that have a document frequency strictly higher than the given threshold (corpus-specific stop words). If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
  • min_df (float in range [0.0, 1.0] or int, default=1) – When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents; if integer, absolute counts. This parameter is ignored if vocabulary is not None.
  • max_features (int or None, default=None) – If not None, build a vocabulary that only consider the top max_features ordered by term frequency across the corpus. This parameter is ignored if vocabulary is not None.
  • vocabulary (Mapping or iterable, optional) – Either a Mapping (e.g., a dict) where keys are terms and values are indices in the feature matrix, or an iterable over terms. If not given, a vocabulary is determined from the input documents.
  • binary (boolean, default=False) – If True, all non-zero term counts are set to 1. This does not mean outputs will have only 0/1 values, only that the tf term in tf-idf is binary. (Set idf and normalization to False to get 0/1 outputs.)
  • dtype (type, optional) – Type of the matrix returned by fit_transform() or transform().
  • norm ('l1', 'l2' or None, optional) – Norm used to normalize term vectors. None for no normalization.
  • use_idf (boolean, default=True) – Enable inverse-document-frequency reweighting.
  • smooth_idf (boolean, default=True) – Smooth idf weights by adding one to document frequencies, as if an extra document was seen containing every term in the collection exactly once. Prevents zero divisions.
  • sublinear_tf (boolean, default=False) – Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
vocabulary_

dict – A mapping of terms to feature indices.

idf_

array, shape = [n_features], or None – The learned idf vector (global term weights) when use_idf is set to True, None otherwise.

stop_words_

set

Terms that were ignored because they either:
  • occurred in too many documents (max_df)
  • occurred in too few documents (min_df)
  • were cut off by feature selection (max_features).

This is only available if no vocabulary was given.

fit_transform(raw_documents, y=None)[source]

Learn vocabulary and idf, return term-document matrix. This is equivalent to fit followed by transform, but more efficiently implemented.

Parameters:raw_documents (iterable) – An iterable which yields either str, unicode or file objects
Returns:X – Tf-idf-weighted document-term matrix.
Return type:sparse matrix, [n_samples, n_features]
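
A hedged sketch of fitting the vectorizer directly; the toy corpus and parameter values are illustrative only:

>>> from languageflow.transformer.tfidf import TfidfVectorizer
>>> docs = ["hello world", "hello languageflow"]
>>> tfidf = TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)
>>> X = tfidf.fit_transform(docs)  # tf-idf-weighted document-term matrix
>>> X.shape                        # (n_samples, n_features)
>>> tfidf.idf_                     # learned idf vector, one weight per feature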

languageflow.model

SGDClassifier

class languageflow.model.sgd.SGDClassifier(*args, **kwargs)[source]

Linear classifiers (SVM, logistic regression, a.o.) with SGD training.

This estimator implements regularized linear models with stochastic gradient descent (SGD) learning: the gradient of the loss is estimated one sample at a time and the model is updated along the way with a decreasing strength schedule (aka learning rate). SGD allows minibatch (online/out-of-core) learning, see the partial_fit method. For best results using the default learning rate schedule, the data should have zero mean and unit variance.

This implementation works with data represented as dense or sparse arrays of floating point values for the features. The model it fits can be controlled with the loss parameter; by default, it fits a linear support vector machine (SVM).

The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared euclidean norm L2 or the absolute norm L1 or a combination of both (Elastic Net). If the parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0 to allow for learning sparse models and achieve online feature selection.

fit(X, y, coef_init=None, intercept_init=None, sample_weight=None)[source]

Fit linear model with Stochastic Gradient Descent.

Parameters:
  • X ({array-like, sparse matrix}, shape (n_samples, n_features)) – Training data
  • y (numpy array, shape (n_samples,)) – Target values
  • coef_init (array, shape (n_classes, n_features)) – The initial coefficients to warm-start the optimization.
  • intercept_init (array, shape (n_classes,)) – The initial intercept to warm-start the optimization.
  • sample_weight (array-like, shape (n_samples,), optional) – Weights applied to individual samples. If not provided, uniform weights are assumed. These weights will be multiplied with class_weight (passed through the constructor) if class_weight is specified
Returns:self – returns an instance of self.
Return type:object

predict(X)[source]

Predict class labels for samples in X.

Parameters:X ({array-like, sparse matrix}, shape = [n_samples, n_features]) – Samples.
Returns:C – Predicted class label per sample.
Return type:array, shape = [n_samples]
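
A hedged sketch of using the classifier outside of a Flow, with tf-idf features; the toy data and the choice to construct SGDClassifier with no extra arguments are assumptions:

>>> import numpy as np
>>> from languageflow.transformer.tfidf import TfidfVectorizer
>>> from languageflow.model.sgd import SGDClassifier
>>> texts = ["gia bao nhieu", "toi muon dat hang", "cho xem bang gia"]
>>> labels = np.array(["price", "order", "price"])
>>> tfidf = TfidfVectorizer()
>>> X = tfidf.fit_transform(texts)
>>> clf = SGDClassifier().fit(X, labels)
>>> clf.predict(tfidf.transform(["bang gia san pham"]))  # e.g. array(['price'], ...)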

XGBoostClassifier

class languageflow.model.xgboost.XGBoostClassifier(base_estimator='gbtree', objective='multi:softprob', metric='mlogloss', num_classes=9, learning_rate=0.25, max_depth=10, max_samples=1.0, max_features=1.0, max_delta_step=0, min_child_weight=4, min_loss_reduction=1, l1_weight=0.0, l2_weight=0.0, l2_on_bias=False, gamma=0.02, inital_bias=0.5, random_state=None, watchlist=None, n_jobs=4, n_iter=150, silent=1, verbose_eval=True)[source]

A simple wrapper around XGBoost. More details: https://github.com/dmlc/xgboost/wiki/Parameters

Parameters:
  • base_estimator (string) –
    Can be ‘gbtree’ or ‘gblinear’
    • ‘gbtree’ : classification
    • ‘gblinear’ : regression
  • gamma (float) – minimum loss reduction required to make a partition, higher values mean more conservative boosting
  • max_depth (int) – maximum depth of a tree
  • min_child_weight (int) – larger values mean more conservative partitioning
  • objective (string) –
    Specify the learning task and the corresponding learning objective or a custom objective function to be used
    • ‘reg:linear’ : linear regression
    • ‘reg:logistic’ : logistic regression
    • ‘binary:logistic’ : binary logistic regression
    • ‘binary:logitraw’ - binary logistic regression before logistic transformation
    • ‘multi:softmax’ : multiclass classification
    • ‘multi:softprob’ : multiclass classification with class probability output
    • ‘rank:pairwise’ : ranking by minimizing the pairwise loss
  • metric (string) –
    Evaluation metrics:
    • ‘rmse’ - root mean square error
    • ‘logloss’ - negative log likelihood
    • ‘error’ - binary classification error rate
    • ‘merror’ - multiclass error rate
    • ‘mlogloss’ - multiclass logloss
    • ‘auc’ - area under the curve for ranking evaluation
    • ‘ndcg’ - normalized discounted cumulative gain ndcg@n for top n eval
    • ‘map’ - mean average precision map@n for top n eval
fit(X, y=None)[source]
Parameters:
  • X ({array-like, sparse matrix}) – Training data. Shape (n_samples, n_features)
  • y (numpy array) – Target values. Shape (n_samples,)
Returns:self – returns an instance of self.
Return type:object

get_params(deep=False)[source]
predict(X)[source]
predict_proba(X)[source]
set_params(**parameters)[source]
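
A hedged sketch of direct usage; num_classes is set to match the toy labels, and the use of integer-encoded labels with a count matrix is an assumption:

>>> import numpy as np
>>> from languageflow.transformer.count import CountVectorizer
>>> from languageflow.model.xgboost import XGBoostClassifier
>>> texts = ["gia bao nhieu", "toi muon dat hang", "giao hang o dau"]
>>> y = np.array([0, 1, 2])  # assumed integer-encoded labels
>>> X = CountVectorizer().fit_transform(texts)
>>> clf = XGBoostClassifier(num_classes=3, n_iter=20)
>>> clf = clf.fit(X, y)
>>> clf.predict(X)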

KimCNNClassifier

class languageflow.model.cnn.KimCNNClassifier(batch_size=50, kernel_sizes=[3, 4, 5], num_kernel=100, embedding_dim=50, epoch=50, lr=0.001)[source]

An implementation of the model from Kim (2014), “Convolutional Neural Networks for Sentence Classification”.

Parameters:
  • batch_size (int) – Number of samples per gradient update
  • kernel_sizes (list of int) – sizes (widths) of the convolution kernels
  • num_kernel (int) – number of kernels (feature maps) per kernel size
  • embedding_dim (int) – dimension of the word embeddings; only for CNN-rand
  • epoch (int) – Number of epochs to train the model
  • lr (float, optional) – Learning rate (default: 1e-3)

Examples

>>> from languageflow.flow import Flow
>>> flow = Flow()
>>> flow.data(X, y)
>>> model = Model(KimCNNClassifier(batch_size=5, epoch=150, embedding_dim=300), "KimCNNClassifier")
>>> flow.add_model(model)
>>> flow.train()
fit(X, y)[source]

Fit KimCNNClassifier according to X, y

Parameters:
  • X (list of string) – each item is a raw text
  • y (list of string) – each item is a label
predict(X)[source]
Parameters:X (list of string) – Raw texts
Returns:C – List of predicted labels
Return type:list of string
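
A hedged sketch of fitting the classifier directly on raw texts, following the fit/predict signatures above; the tiny batch_size and epoch values only keep the sketch cheap to run:

>>> from languageflow.model.cnn import KimCNNClassifier
>>> X = ["gia bao nhieu", "toi muon dat hang"]
>>> y = ["price", "order"]
>>> clf = KimCNNClassifier(batch_size=2, epoch=2, embedding_dim=50)
>>> clf.fit(X, y)
>>> clf.predict(["bang gia san pham"])  # e.g. ['price']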

FastTextClassifier

class languageflow.model.fasttext.FastTextClassifier[source]

Only supports multiclass classification

fit(X, y, model_filename=None)[source]

Fit FastText according to X, y

Parameters:
  • X (list of string) – each item is a raw text
  • y (list of string) – each item is a label
predict(X)[source]

Predict the most likely label for each text in X

Parameters:X (list of string) – Raw texts
Returns:C – List of predicted labels
Return type:list of string
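
A hedged sketch following the fit/predict signatures above; the model_filename value is illustrative:

>>> from languageflow.model.fasttext import FastTextClassifier
>>> X = ["gia bao nhieu", "toi muon dat hang"]
>>> y = ["price", "order"]
>>> clf = FastTextClassifier()
>>> clf.fit(X, y, model_filename="fasttext.model")
>>> clf.predict(["bang gia san pham"])  # e.g. ['price']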

CRF

class languageflow.model.crf.CRF(params={'c2': 0.01, 'c1': 0.1, 'feature.minfreq': 0}, filename=None)[source]
fit(X, y)[source]

Fit CRF according to X, y

Parameters:
  • X (list of text) – each item is a text
  • y (list) – each item is either a label (in a multi-class problem) or a list of labels (in a multi-label problem)
predict(X)[source]

Predict class labels for samples in X.

Parameters:X ({array-like, sparse matrix}, shape = [n_samples, n_features]) – Samples.

languageflow.log

Analyze and save test results.

MulticlassLogger

class languageflow.log.multiclass.MulticlassLogger[source]

Analyze and save multiclass results

static log(X_test, y_test, y_pred, folder)[source]
Parameters:
  • X_test (list of string) – Raw texts
  • y_test (list of string) – Test labels
  • y_pred (list of string) – Predicted labels
  • folder (string) – log folder
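
A minimal sketch of logging a toy evaluation; the folder name "analyze" mirrors the default used by the other loggers and is illustrative only:

>>> from languageflow.log.multiclass import MulticlassLogger
>>> X_test = ["gia bao nhieu", "toi muon dat hang"]
>>> y_test = ["price", "order"]
>>> y_pred = ["price", "price"]
>>> MulticlassLogger.log(X_test, y_test, y_pred, folder="analyze")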

MultilabelLogger

class languageflow.log.multilabel.MultilabelLogger[source]

Analyze and save multilabel results to multilabel.json and result.json files

static log(X_test, y_test, y_pred, log_folder)[source]
Parameters:
  • X_test (list of string) – Raw texts
  • y_test (list of string) – Test labels
  • y_pred (list of string) – Predicted labels
  • log_folder (string) – path to log folder

TfidfLogger

class languageflow.log.tfidf.TfidfLogger[source]

Analyze and save tfidf results

static log(model_folder, binary_file='tfidf.transformer.bin', log_folder='analyze')[source]
Parameters:
  • model_folder (string) – folder containing the model's binary files
  • binary_file (string) – file path to tfidf binary file
  • log_folder (string) – log folder

CountLogger

class languageflow.log.count.CountLogger[source]

Analyze and save count vectorizer results

static log(model_folder, binary_file='count.transformer.bin', log_folder='analyze')[source]
Parameters:
  • model_folder (string) – folder containing the model's binary files
  • binary_file (string) – file path to count transformer binary file
  • log_folder (string) – log folder

languageflow.board

Board

class languageflow.board.Board(log_folder)[source]

Visualize analyzed results

Examples

>>> from os.path import dirname, join
>>> from languageflow.board import Board
>>> from languageflow.log.multilabel import MultilabelLogger
>>> from languageflow.log.tfidf import TfidfLogger
>>> log_folder = join(dirname(__file__), "log")
>>> model_folder = join(dirname(__file__), "model")
>>> board = Board(log_folder)
>>> MultilabelLogger.log(X_test, y_test, y_pred, log_folder=log_folder)
>>> TfidfLogger.log(model_folder=model_folder, log_folder=log_folder)
>>> board.serve(port=62000)
log_folder = None

Reset log folder

Parameters:log_folder (string) – path to log folder
serve(port=62000)[source]

Start LanguageBoard web application

Parameters:port (int) – port to serve web application