imsegm.pipelines module¶
Pipelines for supervised and unsupervised segmentation
Copyright (C) 2014-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
- imsegm.pipelines.compute_color2d_superpixels_features(image, dict_features, sp_size=30, sp_regul=0.2)[source]¶
segment image into superpixels and estimate features per superpixel
- Parameters
image (ndarray) – input RGB image
dict_features (dict(list(str))) – list of features to be extracted
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square slic
- Return list(list(int)), [[floats]]
superpixels and related features
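To make the per-superpixel feature step concrete, here is a minimal numpy-only sketch of the simplest case, a 'mean' colour feature; `mean_color_per_superpixel` and the toy label map are hypothetical stand-ins for illustration, not part of imsegm:

```python
import numpy as np

def mean_color_per_superpixel(image, slic):
    """Average each colour channel over the pixels of every superpixel.

    `image` is an (H, W, 3) float array, `slic` is an (H, W) int label
    map as produced by a SLIC segmentation (labels 0..K-1).
    """
    nb_sp = slic.max() + 1
    feats = np.zeros((nb_sp, image.shape[2]))
    for lb in range(nb_sp):
        feats[lb] = image[slic == lb].mean(axis=0)
    return feats

# toy 2x4 image split into two superpixels
image = np.zeros((2, 4, 3))
image[:, 2:] = 1.                    # right half is white
slic = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
feats = mean_color_per_superpixel(image, slic)
# superpixel 0 -> [0. 0. 0.], superpixel 1 -> [1. 1. 1.]
```

The returned (nb_superpixels, nb_features) matrix is the shape of table the pipeline functions below feed into their models.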
- imsegm.pipelines.estim_model_classes_group(list_images, nb_classes, dict_features, sp_size=30, sp_regul=0.2, use_scaler=True, pca_coef=None, model_type='GMM', nb_workers=1)[source]¶
estimate a model from a sequence of input images and return it
- Parameters
list_images (list(ndarray)) –
nb_classes (int) – number of classes
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square slic
dict_features (dict(list(str))) – list of features to be extracted
pca_coef (float) – range (0, 1) or None
use_scaler (bool) – whether to use a scaler
model_type (str) – type of the model to estimate, e.g. 'GMM'
nb_workers (int) – number of jobs running in parallel
- Returns
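The scaler and model-estimation steps can be sketched without imsegm. This numpy-only toy pools per-superpixel features from several images, standardises them (the `use_scaler=True` block) and fits a tiny 2-means model in place of the real 'GMM'/'kMeans' estimator; `estim_two_class_model` is a hypothetical name, not the library's implementation:

```python
import numpy as np

def estim_two_class_model(list_feats, nb_iters=10):
    """Pool per-superpixel features from several images, standardise
    them, and fit a crude 2-means model (stand-in for GMM/kMeans)."""
    feats = np.vstack(list_feats)
    mean, std = feats.mean(axis=0), feats.std(axis=0) + 1e-8
    feats = (feats - mean) / std                 # the scaler block
    centers = feats[[0, -1]].copy()              # crude initialisation
    for _ in range(nb_iters):
        # assign every superpixel to its nearest class centre
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return centers, (mean, std)

# two images' worth of 1-D mean-colour features, two clear groups
list_feats = [np.array([[0.1], [0.9], [0.15]]),
              np.array([[0.85], [0.2], [0.95]])]
centers, scaler = estim_two_class_model(list_feats)
# one centre ends up below 0, the other above, i.e. dark vs bright
```

The returned scaler parameters must be reused at segmentation time so that new features live in the same standardised space as the model.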
- imsegm.pipelines.pipe_color2d_slic_features_model_graphcut(image, nb_classes, dict_features, sp_size=30, sp_regul=0.2, pca_coef=None, use_scaler=True, estim_model='GMM', gc_regul=1.0, gc_edge_type='model', debug_visual=None)[source]¶
complete pipeline for segmentation: superpixel computation, feature extraction and GraphCut segmentation
- Parameters
image (ndarray) – input RGB image
nb_classes (int) – number of classes to be segmented (indexing from 0)
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square slic
dict_features (dict) – {clr: list(str)}
pca_coef (float) – range (0, 1) or None
estim_model (str) – model used for estimation, e.g. 'GMM'
gc_regul (float) – GC regularisation
gc_edge_type (str) – graphCut edge type
use_scaler (bool) – using scaler block in pipeline
debug_visual (dict) –
- Return list(list(int))
segmentation matrix mapping each pixel into a class
>>> np.random.seed(0)
>>> image = np.random.random((125, 150, 3)) / 2.
>>> image[:, :75] += 0.5
>>> segm, seg_soft = pipe_color2d_slic_features_model_graphcut(image, 2, {'color': ['mean']})
>>> segm.shape
(125, 150)
>>> seg_soft.shape
(125, 150, 2)
- imsegm.pipelines.pipe_gray3d_slic_features_model_graphcut(image, nb_classes, dict_features, spacing=(12, 1, 1), sp_size=15, sp_regul=0.2, gc_regul=0.1)[source]¶
complete pipeline for segmentation: superpixel computation, feature extraction and GraphCut segmentation
- Parameters
image (ndarray) – input grayscale 3D image
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square segments
nb_classes (int) – number of classes to be segmented (indexing from 0)
gc_regul (float) – regularisation for GC
- Return list(list(int))
segmentation matrix mapping each pixel into a class
>>> np.random.seed(0)
>>> image = np.random.random((5, 125, 150)) / 2.
>>> image[:, :, :75] += 0.5
>>> segm = pipe_gray3d_slic_features_model_graphcut(image, 2, {'color': ['mean']})
>>> segm.shape
(5, 125, 150)
- imsegm.pipelines.segment_color2d_slic_features_model_graphcut(image, model_pipeline, dict_features, sp_size=30, sp_regul=0.2, gc_regul=1.0, gc_edge_type='model', debug_visual=None)[source]¶
complete pipeline for segmentation: superpixel computation, feature extraction and GraphCut segmentation
- Parameters
image (ndarray) – input RGB image
model_pipeline (obj) –
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square slic
dict_features (dict(list(str))) – list of features to be extracted
gc_regul (float) – GC regularisation
gc_edge_type (str) – select the GC edge type
debug_visual (dict) –
- Return list(list(int))
segmentation matrix mapping each pixel into a class
Examples
>>> # UnSupervised:
>>> import imsegm.descriptors as seg_fts
>>> np.random.seed(0)
>>> seg_fts.USE_CYTHON = False
>>> image = np.random.random((125, 150, 3)) / 2.
>>> image[:, :75] += 0.5
>>> model, _ = estim_model_classes_group([image], 2, {'color': ['mean']})
>>> segm, seg_soft = segment_color2d_slic_features_model_graphcut(image, model, {'color': ['mean']})
>>> segm.shape
(125, 150)
>>> seg_soft.shape
(125, 150, 2)
>>> # Supervised:
>>> import imsegm.descriptors as seg_fts
>>> np.random.seed(0)
>>> seg_fts.USE_CYTHON = False
>>> image = np.random.random((125, 150, 3)) / 2.
>>> image[:, 75:] += 0.5
>>> annot = np.zeros(image.shape[:2], dtype=int)
>>> annot[:, 75:] = 1
>>> clf, _, _, _ = train_classif_color2d_slic_features([image], [annot], {'color': ['mean']})
>>> segm, seg_soft = segment_color2d_slic_features_model_graphcut(image, clf, {'color': ['mean']})
>>> segm.shape
(125, 150)
>>> seg_soft.shape
(125, 150, 2)
- imsegm.pipelines.train_classif_color2d_slic_features(list_images, list_annots, dict_features, sp_size=30, sp_regul=0.2, clf_name='RandForest', label_purity=0.9, feature_balance='unique', pca_coef=None, nb_classif_search=1, nb_hold_out=2, nb_workers=1)[source]¶
train classifier on list of annotated images
- Parameters
list_images (list(ndarray)) –
list_annots (list(ndarray)) –
sp_size (int) – initial size of a superpixel (meaning edge length)
sp_regul (float) – regularisation in range (0, 1) where 0 gives elastic and 1 nearly square segments
dict_features (dict(list(str))) – list of features to be extracted
clf_name (str) – select the used classifier
label_purity (float) – set the sample-label purity required for training
feature_balance (str) – set how to balance datasets
pca_coef (float) – select PCA coefficient in range (0, 1), or None
nb_classif_search (int) – number of tries for hyper-parameter search
nb_hold_out (int) – number of images held out in cross-validation
nb_workers (int) – number of jobs running in parallel
- Returns
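The role of `label_purity` can be illustrated with a small numpy-only sketch: each superpixel gets the majority annotation label only when that majority is pure enough, otherwise it is dropped from the training set. `superpixel_labels` is a hypothetical helper for illustration, not the imsegm implementation:

```python
import numpy as np

def superpixel_labels(annot, slic, label_purity=0.9):
    """Assign each superpixel the majority annotation label, or -1
    when the majority share falls below `label_purity` (such mixed
    superpixels are excluded from classifier training)."""
    nb_sp = slic.max() + 1
    labels = np.full(nb_sp, -1, dtype=int)
    for lb in range(nb_sp):
        vals, counts = np.unique(annot[slic == lb], return_counts=True)
        if counts.max() / counts.sum() >= label_purity:
            labels[lb] = vals[counts.argmax()]
    return labels

annot = np.array([[0, 0, 1, 1],
                  [0, 1, 1, 1]])
slic = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
labels = superpixel_labels(annot, slic, label_purity=0.9)
# superpixel 0 is only 3/4 zeros -> impure -> -1; superpixel 1 -> 1
```

Raising `label_purity` trades training-set size for cleaner labels; with the default 0.9 only nearly homogeneous superpixels contribute samples.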
- imsegm.pipelines.wrapper_compute_color2d_slic_features_labels(img_annot, sp_size, sp_regul, dict_features, label_purity)[source]¶
- imsegm.pipelines.CLASSIF_NAME = 'RandForest'[source]¶
select default Classifier for supervised segmentation
- imsegm.pipelines.CLUSTER_METHOD = 'kMeans'[source]¶
select default Modeling/clustering for unsupervised segmentation
- imsegm.pipelines.CROSS_VAL_LEAVE_OUT = 2[source]¶
define how many images will be left out during cross-validation training