imsegm.labeling module¶
Framework for labeling
Copyright (C) 2014-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
-
imsegm.labeling.
assign_label_by_max
(label_hist)[source]¶ assign to each region the label with the maximal count in its histogram
Parameters: label_hist (dict(list(int))) – mapping of label to histogram Return list(int): resulting LookUpTable >>> slic = np.array([[0] * 4 + [1] * 3 + [2] * 3 + [3] * 3] * 4 + ... [[4] * 3 + [5] * 3 + [6] * 3 + [7] * 4] * 4) >>> segm = np.zeros(slic.shape, dtype=int) >>> segm[4:, 6:] = 1 >>> lb_hist = segm_labels_assignment(slic, segm) >>> assign_label_by_max(lb_hist) array([0, 0, 0, 0, 0, 0, 1, 1])
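The behaviour can be sketched in plain numpy (the `_sketch` function name is hypothetical, not part of imsegm; the library's implementation may differ):

```python
import numpy as np

def assign_label_by_max_sketch(label_hist):
    """Build a look-up table: each region gets the label with the
    highest count in its histogram."""
    # label_hist maps region index -> list of per-label pixel counts
    return [int(np.argmax(hist)) for _, hist in sorted(label_hist.items())]
```

For instance, `assign_label_by_max_sketch({0: [10, 2], 1: [1, 9]})` gives `[0, 1]`: region 0 is dominated by label 0, region 1 by label 1.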
-
imsegm.labeling.
assign_label_by_threshold
(dict_label_hist, thresh=0.75)[source]¶ assign a label only if its purity reaches the given threshold
Parameters: Return list(int): resulting LookUpTable
>>> slic = np.array([[0] * 4 + [1] * 3 + [2] * 3 + [3] * 3] * 4 + ... [[4] * 3 + [5] * 3 + [6] * 3 + [7] * 4] * 4) >>> segm = np.zeros(slic.shape, dtype=int) >>> segm[4:, 6:] = 1 >>> lb_hist = segm_labels_assignment(slic, segm) >>> assign_label_by_threshold(lb_hist) array([0, 0, 0, 0, 0, 0, 1, 1])
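A numpy sketch of the idea (the `_sketch` name is hypothetical; that impure regions fall back to label 0 is an assumption about the library's behaviour, which the doctest above does not exercise):

```python
import numpy as np

def assign_label_by_threshold_sketch(dict_label_hist, thresh=0.75):
    """Assign the majority label only where its relative frequency (purity)
    reaches `thresh`; impure regions fall back to label 0 in this sketch."""
    lut = []
    for _, hist in sorted(dict_label_hist.items()):
        hist = np.asarray(hist, dtype=float)
        purity = hist.max() / hist.sum()
        lut.append(int(np.argmax(hist)) if purity >= thresh else 0)
    return lut
```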
-
imsegm.labeling.
assume_bg_on_boundary
(segm, bg_label=0, boundary_size=1)[source]¶ swap labels so that the background label lies mostly on the image boundary
Parameters: Returns: >>> segm = np.zeros((6, 12), dtype=int) >>> segm[1:4, 4:] = 2 >>> assume_bg_on_boundary(segm, boundary_size=1) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> segm[segm == 0] = 1 >>> assume_bg_on_boundary(segm, boundary_size=1) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
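One way to realise the described swap, as a minimal numpy sketch (the `_sketch` name is hypothetical; the library may choose the border label differently):

```python
import numpy as np

def assume_bg_on_boundary_sketch(segm, bg_label=0, boundary_size=1):
    """Swap the label dominating the image border with `bg_label`."""
    b = boundary_size
    # collect labels from the top/bottom rows and left/right columns
    border = np.concatenate([segm[:b].ravel(), segm[-b:].ravel(),
                             segm[:, :b].ravel(), segm[:, -b:].ravel()])
    majority = int(np.bincount(border).argmax())
    out = segm.copy()
    if majority != bg_label:
        bg_mask = segm == bg_label  # remember before overwriting
        out[segm == majority] = bg_label
        out[bg_mask] = majority
    return out
```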
-
imsegm.labeling.
binary_image_from_coords
(coords, size)[source]¶ create a binary image from contour point coordinates
Parameters: Return ndarray: >>> img = np.zeros((6, 6), dtype=int) >>> img[1:5, 2:] = 1 >>> coords = contour_coords(img) >>> binary_image_from_coords(coords, img.shape) array([[0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0]])
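The operation is essentially a scatter of ones, sketched here with numpy (the `_sketch` name is hypothetical):

```python
import numpy as np

def binary_image_from_coords_sketch(coords, size):
    """Paint 1 at every (row, col) coordinate on an otherwise zero image."""
    img = np.zeros(size, dtype=int)
    coords = np.asarray(coords)
    img[coords[:, 0], coords[:, 1]] = 1
    return img
```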
-
imsegm.labeling.
compute_boundary_distances
(segm_ref, segm)[source]¶ compute distances between the boundaries of two segmentations
Parameters: - segm_ref (ndarray) – reference segmentation
- segm (ndarray) – input segmentation
Return ndarray: >>> segm_ref = np.zeros((6, 10), dtype=int) >>> segm_ref[3:4, 4:5] = 1 >>> segm = np.zeros((6, 10), dtype=int) >>> segm[:, 2:9] = 1 >>> pts, dist = compute_boundary_distances(segm_ref, segm) >>> pts array([[2, 4], [3, 3], [3, 4], [3, 5], [4, 4]]) >>> dist.tolist() [2.0, 1.0, 2.0, 3.0, 2.0]
-
imsegm.labeling.
compute_distance_map
(seg, label=1)[source]¶ compute distance from label boundaries
Parameters: - seg (ndarray) – integer image, typically a segmentation
- label (int) – selected single label in the segmentation
Return ndarray: >>> img = np.zeros((6, 6), dtype=int) >>> img[1:5, 2:] = 1 >>> dist = compute_distance_map(img) >>> np.round(dist, 2) array([[ 2.24, 1.41, 1. , 1. , 1. , 1.41], [ 2. , 1. , 0. , 0. , 0. , 1. ], [ 2. , 1. , 0. , 1. , 1. , 1.41], [ 2. , 1. , 0. , 1. , 1. , 1.41], [ 2. , 1. , 0. , 0. , 0. , 1. ], [ 2.24, 1.41, 1. , 1. , 1. , 1.41]])
-
imsegm.labeling.
compute_labels_overlap_matrix
(seg1, seg2)[source]¶ compute the overlap between two segmentations (atlases) of the same size
Parameters: - seg1 (ndarray) – np.array<height, width>
- seg2 (ndarray) – np.array<height, width>
Return ndarray: np.array<height, width>
>>> seg1 = np.zeros((7, 15), dtype=int) >>> seg1[1:4, 5:10] = 3 >>> seg1[5:7, 6:13] = 2 >>> seg2 = np.zeros((7, 15), dtype=int) >>> seg2[2:5, 7:12] = 1 >>> seg2[4:7, 7:14] = 3 >>> compute_labels_overlap_matrix(seg1, seg1) array([[76, 0, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 14, 0], [ 0, 0, 0, 15]]) >>> compute_labels_overlap_matrix(seg1, seg2) array([[63, 4, 0, 9], [ 0, 0, 0, 0], [ 2, 0, 0, 12], [ 9, 6, 0, 0]])
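The overlap matrix is a label co-occurrence count, which can be sketched with `np.add.at` (the `_sketch` name is hypothetical):

```python
import numpy as np

def labels_overlap_sketch(seg1, seg2):
    """Co-occurrence matrix: entry [i, j] counts pixels where
    seg1 == i and seg2 == j."""
    assert seg1.shape == seg2.shape, 'segmentations must have the same size'
    overlap = np.zeros((seg1.max() + 1, seg2.max() + 1), dtype=int)
    # unbuffered scatter-add over all pixel label pairs
    np.add.at(overlap, (seg1.ravel(), seg2.ravel()), 1)
    return overlap
```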
-
imsegm.labeling.
contour_binary_map
(seg, label=1, include_boundary=False)[source]¶ get object boundaries
Parameters: Return ndarray: >>> img = np.zeros((6, 6), dtype=int) >>> img[1:5, 2:] = 1 >>> contour_binary_map(img) array([[0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0]]) >>> contour_binary_map(img, include_boundary=True) array([[0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 1], [0, 0, 1, 0, 0, 1], [0, 0, 1, 0, 0, 1], [0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0]])
-
imsegm.labeling.
contour_coords
(seg, label=1, include_boundary=False)[source]¶ get object boundaries
Parameters: Return [[int, int]]: >>> img = np.zeros((6, 6), dtype=int) >>> img[1:5, 2:] = 1 >>> contour_coords(img) [[1, 2], [1, 3], [1, 4], [2, 2], [3, 2], [4, 2], [4, 3], [4, 4]] >>> contour_coords(img, include_boundary=True) #doctest: +NORMALIZE_WHITESPACE [[1, 2], [1, 3], [1, 4], [2, 2], [3, 2], [4, 2], [4, 3], [4, 4], [1, 5], [2, 5], [3, 5], [4, 5]]
-
imsegm.labeling.
convert_segms_2_list
(segms)[source]¶ convert segmentations to a flat list that can be simply used for standard evaluation (classification or clustering metrics)
Parameters: segms ([ndarray]) – list of segmentation Return list(int): >>> seg_pipe = np.ones((2, 3), dtype=int) >>> convert_segms_2_list([seg_pipe, seg_pipe * 0, seg_pipe * 2]) [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2]
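The conversion is a plain flatten-and-concatenate, sketched below (the `_sketch` name is hypothetical):

```python
import numpy as np

def convert_segms_2_list_sketch(segms):
    """Flatten a list of label images into one flat list of labels,
    suitable for sklearn-style classification/clustering metrics."""
    return np.concatenate([np.asarray(seg).ravel() for seg in segms]).tolist()
```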
-
imsegm.labeling.
histogram_regions_labels_counts
(slic, segm)[source]¶ histogram of overlapping regions between two segmentations; the typical usage is labelling superpixels from an annotation
Parameters: - slic (ndarray) – input superpixel segmentation
- segm (ndarray) – reference segmentation
Return ndarray: >>> slic = np.array([[0] * 3 + [1] * 3 + [2] * 3] * 4 + ... [[4] * 3 + [5] * 3 + [6] * 3] * 4) >>> segm = np.zeros(slic.shape, dtype=int) >>> segm[4:, 5:] = 2 >>> histogram_regions_labels_counts(slic, segm) array([[ 12., 0., 0.], [ 12., 0., 0.], [ 12., 0., 0.], [ 0., 0., 0.], [ 12., 0., 0.], [ 8., 0., 4.], [ 0., 0., 12.]])
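The counting step is the same scatter-add as the overlap matrix, sketched here (the `_sketch` name is hypothetical); the normalised variant below divides each row by its sum:

```python
import numpy as np

def regions_labels_counts_sketch(slic, segm):
    """For every superpixel, count how many of its pixels carry each
    annotation label."""
    counts = np.zeros((slic.max() + 1, segm.max() + 1))
    np.add.at(counts, (slic.ravel(), segm.ravel()), 1)
    return counts
```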
-
imsegm.labeling.
histogram_regions_labels_norm
(slic, segm)[source]¶ normalised histogram of overlapping regions between two segmentations (relative overlap); the typical usage is labelling superpixels from an annotation
Parameters: - slic (ndarray) – input superpixel segmentation
- segm (ndarray) – reference segmentation
Return ndarray: >>> slic = np.array([[0] * 3 + [1] * 3 + [2] * 3] * 4 + ... [[4] * 3 + [5] * 3 + [6] * 3] * 4) >>> segm = np.zeros(slic.shape, dtype=int) >>> segm[4:, 5:] = 2 >>> histogram_regions_labels_norm(slic, segm) # doctest: +ELLIPSIS array([[ 1. , 0. , 0. ], [ 1. , 0. , 0. ], [ 1. , 0. , 0. ], [ 0. , 0. , 0. ], [ 1. , 0. , 0. ], [ 0.66666667, 0. , 0.33333333], [ 0. , 0. , 1. ]])
-
imsegm.labeling.
mask_segm_labels
(img_labeling, labels, mask_init=None)[source]¶ given a label image and a list of desired labels, create a mask marking all pixels whose label is in the list (a logical OR of the per-label masks)
Parameters: Return ndarray: np.array<height, width> bool mask
>>> img = np.zeros((4, 6)) >>> img[:-1, 1:] = 1 >>> img[1:2, 2:4] = 2 >>> mask_segm_labels(img, [1]) array([[False, True, True, True, True, True], [False, True, False, False, True, True], [False, True, True, True, True, True], [False, False, False, False, False, False]], dtype=bool) >>> mask_segm_labels(img, [2], np.full(img.shape, True, dtype=bool)) array([[ True, True, True, True, True, True], [ True, True, True, True, True, True], [ True, True, True, True, True, True], [ True, True, True, True, True, True]], dtype=bool)
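A numpy sketch of the masking (the `_sketch` name is hypothetical):

```python
import numpy as np

def mask_segm_labels_sketch(img_labeling, labels, mask_init=None):
    """Boolean mask of pixels whose label is in `labels`, OR-ed with an
    optional initial mask."""
    mask = np.isin(img_labeling, labels)
    if mask_init is not None:
        mask = np.logical_or(mask, mask_init)
    return mask
```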
-
imsegm.labeling.
merge_probab_labeling_2d
(proba, dict_labels)[source]¶ merge probability-labelling channels according to a given mapping
Parameters: Return ndarray: >>> p = np.ones((5, 5)) >>> proba = np.array([p * 0.3, p * 0.4, p * 0.2]) >>> proba = np.rollaxis(proba, 0, 3) >>> proba.shape (5, 5, 3) >>> proba_new = merge_probab_labeling_2d(proba, {0: [1, 2], 1: [0]}) >>> proba_new.shape (5, 5, 2) >>> proba_new[0, 0] array([ 0.6, 0.3])
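The merge sums the selected probability channels, as this numpy sketch shows (the `_sketch` name is hypothetical):

```python
import numpy as np

def merge_probab_sketch(proba, dict_labels):
    """New channel k is the sum of the old probability channels listed
    under key k in `dict_labels`."""
    out = np.zeros(proba.shape[:2] + (len(dict_labels),))
    for new_lb, old_lbs in dict_labels.items():
        out[:, :, new_lb] = proba[:, :, old_lbs].sum(axis=2)
    return out
```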
-
imsegm.labeling.
neighbour_connect4
(seg, label, pos)[source]¶ check whether the segmentation is incoherent around the given position
Parameters: Returns: >>> neighbour_connect4(np.eye(5), 1, (2, 2)) True >>> neighbour_connect4(np.ones((5, 5)), 1, (3, 3)) False
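One plausible reading, consistent with the doctest above, is "does any 4-neighbour carry a different label"; the sketch below encodes that assumption (the `_sketch` name is hypothetical, and the library's exact criterion may differ):

```python
import numpy as np

def neighbour_connect4_sketch(seg, label, pos):
    """True when at least one in-image 4-neighbour of `pos` carries a
    label other than `label`."""
    r, c = pos
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < seg.shape[0] and 0 <= cc < seg.shape[1] \
                and seg[rr, cc] != label:
            return True
    return False
```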
-
imsegm.labeling.
relabel_by_dict
(labels, dict_labels)[source]¶ relabel according to a given dictionary mapping new labels to lists of old labels
Parameters: Return ndarray: >>> labels = np.array([2, 1, 0, 3, 3, 0, 2, 3, 0, 0]) >>> relabel_by_dict(labels, {0: [1, 2], 1: [0, 3]}).tolist() [0, 0, 1, 1, 1, 1, 0, 1, 1, 1]
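A numpy sketch of the relabelling (the `_sketch` name is hypothetical):

```python
import numpy as np

def relabel_by_dict_sketch(labels, dict_labels):
    """Map every old label listed under a new key onto that new label."""
    labels = np.asarray(labels)
    out = labels.copy()
    for new_lb, old_lbs in dict_labels.items():
        # masks are computed on the original array, so overlapping
        # old/new label values do not interfere
        out[np.isin(labels, old_lbs)] = new_lb
    return out
```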
-
imsegm.labeling.
relabel_max_overlap_merge
(seg_ref, seg_relabel, keep_bg=False)[source]¶ relabel the second segmentation so that the relative overlap is maximised for each pattern (object); if one pattern in the reference atlas is likely composed of multiple patterns in the atlas being relabelled, they are merged
Note
it skips background class - 0
Parameters: - seg_ref (ndarray) – reference segmentation
- seg_relabel (ndarray) – segmentation for relabeling
- keep_bg (bool) – keep the background label 0 fixed
Return ndarray: resulting segmentation
>>> atlas1 = np.zeros((7, 15), dtype=int) >>> atlas1[1:4, 5:10] = 1 >>> atlas1[5:7, 3:13] = 2 >>> atlas2 = np.zeros((7, 15), dtype=int) >>> atlas2[0:3, 7:12] = 1 >>> atlas2[3:7, 1:7] = 2 >>> atlas2[4:7, 7:14] = 3 >>> atlas2[:2, :3] = 5 >>> relabel_max_overlap_merge(atlas1, atlas2, keep_bg=True) array([[1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 2, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0], [0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0], [0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0], [0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0]]) >>> relabel_max_overlap_merge(atlas2, atlas1, keep_bg=True) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0], [0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0]]) >>> relabel_max_overlap_merge(atlas1, atlas2, keep_bg=False) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 0], [0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 0], [0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 0]])
-
imsegm.labeling.
relabel_max_overlap_unique
(seg_ref, seg_relabel, keep_bg=False)[source]¶ relabel the second segmentation so that the relative overlap is maximised for each pattern (object); the relation among patterns is 1:1
Note
it skips background class - 0
Parameters: - seg_ref (ndarray) – reference segmentation
- seg_relabel (ndarray) – segmentation for relabeling
- keep_bg (bool) – keep the background
Return ndarray: resulting segmentation
>>> atlas1 = np.zeros((7, 15), dtype=int) >>> atlas1[1:4, 5:10] = 1 >>> atlas1[5:7, 3:13] = 2 >>> atlas2 = np.zeros((7, 15), dtype=int) >>> atlas2[0:3, 7:12] = 1 >>> atlas2[3:7, 1:7] = 2 >>> atlas2[4:7, 7:14] = 3 >>> atlas2[:2, :3] = 5 >>> relabel_max_overlap_unique(atlas1, atlas2, keep_bg=True) array([[5, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [5, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0]]) >>> relabel_max_overlap_unique(atlas2, atlas1, keep_bg=True) array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 0], [0, 0, 0, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 0]]) >>> relabel_max_overlap_unique(atlas1, atlas2, keep_bg=False) array([[5, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [5, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0]]) >>> atlas2[0, 0] = -1 >>> relabel_max_overlap_unique(atlas1, atlas2, keep_bg=True) array([[-1, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [ 5, 5, 5, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [ 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0], [ 0, 3, 3, 3, 3, 3, 3, 0, 0, 0, 0, 0, 0, 0, 0], [ 0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [ 0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0], [ 0, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 0]])
-
imsegm.labeling.
segm_labels_assignment
(segm, segm_gt)[source]¶ create an assignment of ground-truth labels to the particular regions
Parameters: - segm (ndarray) – input segmentation
- segm_gt (ndarray) – true segmentation
Returns: >>> slic = np.array([[0] * 3 + [1] * 3 + [2] * 3 + [3] * 3] * 4 + ... [[4] * 3 + [5] * 3 + [6] * 3 + [7] * 3] * 4) >>> segm = np.zeros(slic.shape, dtype=int) >>> segm[4:, 6:] = 1 >>> segm_labels_assignment(slic, segm) #doctest: +NORMALIZE_WHITESPACE {0: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 1: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 2: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 4: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 5: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 6: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 7: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
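The assignment gathers, per region, the ground-truth labels of its pixels; a numpy sketch (the `_sketch` name is hypothetical):

```python
import numpy as np

def segm_labels_assignment_sketch(segm, segm_gt):
    """For every region of `segm`, collect the ground-truth labels of
    its pixels."""
    assert segm.shape == segm_gt.shape
    return {int(lb): segm_gt[segm == lb].tolist() for lb in np.unique(segm)}
```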
-
imsegm.labeling.
sequence_labels_merge
(labels_stack, dict_colors, labels_free, change_label=-1)[source]¶ the input is a time series of label images; the output is a single label image keeping only the labels that were constant over the whole series; as a special case, free labels are treated as matching any label
Example, for a label series over {0, 1, 2} where 0 is the free label:
- 11111111 -> 1
- 11211211 -> CHANGE_LABEL
- 10111100 -> 1
- 00000000 -> CHANGE_LABEL
Parameters: Return ndarray: np.array<height, width>
>>> dict_colors = {0: [], 1: [], 2: []} >>> sequence_labels_merge(np.zeros((8, 1, 1)), dict_colors, [0]) array([[-1]]) >>> sequence_labels_merge(np.ones((8, 1, 1)), dict_colors, [0]) array([[1]]) >>> sequence_labels_merge(np.array([[1], [1], [2], [1], [1], [1], [2], [1]]), dict_colors, [0]) array([-1]) >>> sequence_labels_merge(np.array([[1], [0], [1], [1], [1], [1], [0], [0]]), dict_colors, [0]) array([1])
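The per-pixel rule can be sketched as follows (the `_sketch` name is hypothetical, and the `dict_colors` argument of the real function is omitted here since only the label values matter for the rule itself):

```python
import numpy as np

def sequence_labels_merge_sketch(labels_stack, labels_free, change_label=-1):
    """Per pixel over time: keep a label only if all frames that do not
    carry a free label agree on it; otherwise emit `change_label`."""
    stack = np.asarray(labels_stack)
    out = np.full(stack.shape[1:], change_label, dtype=int)
    for idx in np.ndindex(*stack.shape[1:]):  # a plain loop is fine for a sketch
        series = stack[(slice(None),) + idx]
        fixed = series[~np.isin(series, labels_free)]
        if fixed.size and np.all(fixed == fixed[0]):
            out[idx] = fixed[0]
    return out
```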