imsegm.annotation module
Framework for handling annotations
Copyright (C) 2014-2018 Jiri Borovec <jiri.borovec@fel.cvut.cz>
- imsegm.annotation.convert_img_colors_to_labels(img_rgb, lut_label_color)[source]
take an RGB image and a dictionary mapping labels to colours; return the image relabelled according to this dictionary
- Parameters
img_rgb (ndarray) – np.array<height, width, 3> input RGB image
lut_label_color ({int: tuple(float, float, float)}) – mapping from label to colour
- Return ndarray
np.array<height, width> labeling
>>> np.random.seed(0)
>>> seg = np.random.randint(0, 2, (5, 7))
>>> img = np.array([(0.2, 0.2, 0.2), (0.9, 0.9, 0.9)])[seg]
>>> d_lb_clr = {0: (0.2, 0.2, 0.2), 1: (0.9, 0.9, 0.9)}
>>> convert_img_colors_to_labels(img, d_lb_clr)
array([[0, 1, 1, 0, 1, 1, 1],
       [1, 1, 1, 1, 0, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [1, 1, 0, 0, 1, 1, 1],
       [1, 0, 1, 0, 1, 0, 1]])
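For intuition, the exact-match colour-to-label conversion can be sketched in plain NumPy (a hypothetical re-implementation, `colors_to_labels`, not the package's actual code): for each (label, colour) pair, mask the pixels whose colour matches and write the label there.

```python
import numpy as np

def colors_to_labels(img_rgb, lut_label_color):
    """Map each RGB pixel to its label via an exact colour match (sketch)."""
    labels = np.zeros(img_rgb.shape[:2], dtype=int)
    for label, color in lut_label_color.items():
        # boolean mask of pixels whose colour equals this label's colour
        mask = np.all(img_rgb == np.asarray(color), axis=-1)
        labels[mask] = label
    return labels

np.random.seed(0)
seg = np.random.randint(0, 2, (5, 7))
img = np.array([(0.2, 0.2, 0.2), (0.9, 0.9, 0.9)])[seg]
d_lb_clr = {0: (0.2, 0.2, 0.2), 1: (0.9, 0.9, 0.9)}
print(np.array_equal(colors_to_labels(img, d_lb_clr), seg))  # True
```

Exact equality works here because the image was built from the same float palette; real annotations may need a tolerance or nearest-colour matching instead.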
- imsegm.annotation.convert_img_colors_to_labels_reverted(img_rgb, dict_color_label)[source]
take an RGB image and a dictionary mapping colours to labels; return the image relabelled according to this dictionary
- Parameters
img_rgb (ndarray) – np.array<height, width, 3> input RGB image
dict_color_label ({tuple(int, int, int): int}) – mapping from colour to label
- Return ndarray
np.array<height, width> labeling
>>> np.random.seed(0)
>>> seg = np.random.randint(0, 2, (5, 7))
>>> img = np.array([(0.2, 0.2, 0.2), (0.9, 0.9, 0.9)])[seg]
>>> d_clr_lb = {(0.2, 0.2, 0.2): 0, (0.9, 0.9, 0.9): 1}
>>> convert_img_colors_to_labels_reverted(img, d_clr_lb)
array([[0, 1, 1, 0, 1, 1, 1],
       [1, 1, 1, 1, 0, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [1, 1, 0, 0, 1, 1, 1],
       [1, 0, 1, 0, 1, 0, 1]])
- imsegm.annotation.convert_img_labels_to_colors(segm, lut_label_colors)[source]
convert a labelling into colours according to the given dictionary
- Parameters
segm (ndarray) – np.array<height, width> input labelling
lut_label_colors ({int: tuple(float, float, float)}) – mapping from label to colour
- Return ndarray
np.array<height, width, 3>
>>> np.random.seed(0)
>>> seg = np.random.randint(0, 2, (5, 7))
>>> d_lb_clr = {0: (0.2, 0.2, 0.2), 1: (0.9, 0.9, 0.9)}
>>> img = convert_img_labels_to_colors(seg, d_lb_clr)
>>> img[:, :, 0]
array([[ 0.2,  0.9,  0.9,  0.2,  0.9,  0.9,  0.9],
       [ 0.9,  0.9,  0.9,  0.9,  0.2,  0.2,  0.9],
       [ 0.2,  0.2,  0.2,  0.2,  0.2,  0.9,  0.2],
       [ 0.9,  0.9,  0.2,  0.2,  0.9,  0.9,  0.9],
       [ 0.9,  0.2,  0.9,  0.2,  0.9,  0.2,  0.9]])
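The reverse direction, labels to colours, amounts to building a lookup table indexed by label and fancy-indexing it with the label image. A minimal sketch, assuming labels are small non-negative integers (the helper name `labels_to_colors` is made up for illustration):

```python
import numpy as np

def labels_to_colors(segm, lut_label_colors):
    """Paint a label image with colours from a dict (sketch)."""
    # build a (max_label + 1, 3) lookup table, then index it with the labels
    lut = np.zeros((max(lut_label_colors) + 1, 3))
    for label, color in lut_label_colors.items():
        lut[label] = color
    return lut[segm]

np.random.seed(0)
seg = np.random.randint(0, 2, (5, 7))
d_lb_clr = {0: (0.2, 0.2, 0.2), 1: (0.9, 0.9, 0.9)}
img = labels_to_colors(seg, d_lb_clr)
print(img.shape)  # (5, 7, 3)
```

Indexing the lookup table is vectorised, so this avoids a per-pixel Python loop even for large label images.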
- imsegm.annotation.group_images_frequent_colors(paths_img, ratio_threshold=0.001)[source]
look through all images and estimate the most frequent colours
- Parameters
paths_img (list(str)) – paths to the input images
ratio_threshold (float) – ratio of pixels of a given colour required for that colour to be kept
- Return {tuple(int,int,int): int}
>>> import os
>>> from skimage import data
>>> from imsegm.utilities.data_io import io_imsave
>>> path_img = './sample-image.png'
>>> io_imsave(path_img, data.astronaut())
>>> d_clrs = group_images_frequent_colors([path_img], ratio_threshold=3e-4)
>>> sorted([d_clrs[c] for c in d_clrs], reverse=True)
[27969, 1345, 1237, 822, 450, 324, 313, 244, 229, 213, 163, 160, 158, 157, 150, 137, 120, 119, 117, 114, 98, 92, 92, 91, 81]
>>> os.remove(path_img)
- imsegm.annotation.image_color_2_labels(img, colors=None)[source]
quantize the input image according to a given list of possible colours
- Parameters
img (ndarray) – np.array<height, width, 3> input image
colors (list(tuple(int,int,int))) – list of possible colours; if None, the colours are estimated from the image
- Return ndarray
np.array<height, width>
>>> np.random.seed(0)
>>> rand = np.random.randint(0, 2, (5, 7)).astype(np.uint8)
>>> img = np.rollaxis(np.array([rand] * 3), 0, 3)
>>> image_color_2_labels(img)
array([[1, 0, 0, 1, 0, 0, 0],
       [0, 0, 0, 0, 1, 1, 0],
       [1, 1, 1, 1, 1, 0, 1],
       [0, 0, 1, 1, 0, 0, 0],
       [0, 1, 0, 1, 0, 1, 0]]...)
- imsegm.annotation.image_frequent_colors(img, ratio_threshold=0.001)[source]
look through the image and estimate its most frequent colours
- Parameters
img (ndarray) – np.array<height, width, 3>
ratio_threshold (float) – ratio of pixels of a given colour required for that colour to be considered important
- Return {tuple(int,int,int): int}
>>> np.random.seed(0)
>>> img = np.random.randint(0, 2, (50, 50, 3)).astype(np.uint8)
>>> d = image_frequent_colors(img)
>>> sorted(d.keys())
[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
>>> sorted(d.values())
[271, 289, 295, 317, 318, 330, 335, 345]
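Counting frequent colours can be sketched with `collections.Counter` over the pixel tuples, keeping only colours whose count reaches the ratio threshold (a hypothetical re-implementation, not the package's code):

```python
import numpy as np
from collections import Counter

def frequent_colors(img, ratio_threshold=0.001):
    """Count colours, keep those covering >= ratio_threshold of pixels (sketch)."""
    pixels = img.reshape(-1, img.shape[-1])
    counts = Counter(map(tuple, pixels))
    min_count = ratio_threshold * len(pixels)
    return {color: n for color, n in counts.items() if n >= min_count}

np.random.seed(0)
img = np.random.randint(0, 2, (50, 50, 3)).astype(np.uint8)
d = frequent_colors(img)
print(len(d))  # 8 binary colours
```

A Counter over tuples is simple but slow on large images; a faster variant would hash the three channels into a single integer per pixel and use `np.unique` with `return_counts=True`.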
- imsegm.annotation.load_info_group_by_slices(path_txt, stages, pos_columns=('ant_x', 'ant_y', 'post_x', 'post_y', 'lat_x', 'lat_y'), dict_slice_tol={1: 1, 2: 2, 3: 2, 4: 3, 5: 3, 6: 0})[source]
load all info and group the position info according to the name of the stack
- Parameters
path_txt (str) – path to the text annotation file
stages (list(int)) – stages to be loaded
pos_columns (tuple(str)) – names of the position columns
dict_slice_tol ({int: int}) – tolerance of slice distance per stage
- Returns
pandas DataFrame
>>> import os
>>> from imsegm.utilities.data_io import update_path
>>> path_txt = os.path.join(update_path('data-images'), 'drosophila_ovary_slice', 'info_ovary_images.txt')
>>> df = load_info_group_by_slices(path_txt, [4])
>>> df.sort_index(axis=1)
            ant_x  ant_y  lat_x  lat_y post_x post_y
image
insitu7569  [298]  [327]  [673]  [411]  [986]  [155]
- imsegm.annotation.quantize_image_nearest_color(img, colors)[source]
quantize the input image according to a given list of possible colours
- Parameters
img (ndarray) – np.array<height, width, 3> input image
colors (list(tuple(int,int,int))) – list of possible colours
- Return ndarray
np.array<height, width, 3>
>>> np.random.seed(0)
>>> img = np.random.randint(0, 2, (5, 7, 3)).astype(np.uint8)
>>> im = quantize_image_nearest_color(img, [(0, 0, 0), (1, 1, 1)])
>>> im[:, :, 0]
array([[1, 1, 1, 1, 0, 0, 0],
       [1, 1, 1, 1, 1, 1, 0],
       [1, 1, 0, 1, 1, 0, 1],
       [0, 0, 1, 0, 1, 0, 1],
       [1, 1, 1, 0, 1, 0, 0]], dtype=uint8)
>>> [np.array_equal(im[:, :, 0], im[:, :, i]) for i in [1, 2]]
[True, True]
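Nearest-colour quantisation can be sketched by computing, for every pixel, the squared distance to each palette colour and picking the closest one (a NumPy sketch under the assumption of Euclidean distance in RGB space; the helper name is made up):

```python
import numpy as np

def quantize_nearest_color(img, colors):
    """Replace each pixel by the closest colour from a given palette (sketch)."""
    palette = np.asarray(colors, dtype=float)              # (k, 3)
    flat = img.reshape(-1, img.shape[-1]).astype(float)    # (n, 3)
    # squared Euclidean distances between every pixel and every palette colour
    dist = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    nearest = dist.argmin(axis=1)                          # (n,) palette indices
    return palette[nearest].reshape(img.shape)

np.random.seed(0)
img = np.random.randint(0, 2, (5, 7, 3)).astype(np.uint8)
im = quantize_nearest_color(img, [(0, 0, 0), (1, 1, 1)])
# every output pixel is one of the two palette colours
palette_used = sorted({tuple(map(float, c)) for c in im.reshape(-1, 3)})
print(palette_used)  # [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
```

The (n, k) distance matrix is memory-hungry for large images; chunking the pixel axis keeps the same logic with bounded memory.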
- imsegm.annotation.quantize_image_nearest_pixel(img, colors)[source]
quantize the input image according to a given list of possible colours
- Parameters
img (ndarray) – np.array<height, width, 3> input image
colors (list(tuple(int,int,int))) – list of possible colours
- Return ndarray
np.array<height, width, 3>
>>> np.random.seed(0)
>>> img = np.random.randint(0, 2, (5, 7, 3)).astype(np.uint8)
>>> im = quantize_image_nearest_pixel(img, [(0, 0, 0), (1, 1, 1)])
>>> im[:, :, 0]
array([[1, 1, 1, 1, 0, 0, 0],
       [1, 1, 1, 1, 0, 0, 0],
       [1, 1, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0, 0, 0]])
>>> [np.array_equal(im[:, :, 0], im[:, :, i]) for i in [1, 2]]
[True, True]
- imsegm.annotation.unique_image_colors(img)[source]
find all unique colours in the image and return them as a list
- Parameters
img (ndarray) – np.array<height, width, 3>
- Returns
list(tuple(int,int,int))
>>> np.random.seed(0)
>>> img = np.random.randint(0, 2, (50, 50, 3))
>>> unique_image_colors(img)
[(1, 0, 0), (1, 1, 0), (0, 1, 0), (1, 1, 1), (0, 1, 1), (0, 0, 1), (1, 0, 1), (0, 0, 0)]
>>> img = np.random.randint(0, 256, (150, 150, 3))
>>> unique_image_colors(img)
[...]
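Collecting unique colours can be sketched with `np.unique` over the flattened pixel array (a hypothetical re-implementation; note its output is in lexicographic order, unlike the function above, so only membership and count are checked):

```python
import numpy as np

def image_unique_colors(img):
    """Return the distinct colours of an image as a list of tuples (sketch)."""
    pixels = img.reshape(-1, img.shape[-1])
    uniq = np.unique(pixels, axis=0)  # unique rows = unique colours
    return [tuple(c) for c in uniq]

np.random.seed(0)
img = np.random.randint(0, 2, (50, 50, 3))
print(len(image_unique_colors(img)))  # 8
```

`np.unique(..., axis=0)` sorts the rows, which is typically much faster than a Python-level set over pixel tuples on large images.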
- imsegm.annotation.ANNOT_SLICE_DIST_TOL = {1: 1, 2: 2, 3: 2, 4: 3, 5: 3, 6: 0}[source]
distance tolerance along the Z axis within which a nearby slice may still belong to the same egg