model
deepdataspace.model
This module defines data models over mongodb.
base
deepdataspace.model._base
This module defines the common model APIs.
- class BaseModel[source]
Bases:
BaseModel
Base model for all models. Every model represents a mongodb collection.
- db: ClassVar = Database(MongoClient(host=['127.0.0.1:9801'], document_class=dict, tz_aware=False, connect=True, authmechanism='SCRAM-SHA-256'), 'dds')
- cache: ClassVar = {}
- abstract classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Derived model class should implement this function to get the mongodb collection.
- to_dict(include: Optional[list] = None, exclude: Optional[list] = None)[source]
Convert a model object to a python dict.
- classmethod convert_id_for_python(data: dict)[source]
Convert the mongo ‘_id’ field to ‘id’ field, without the prefix underscore.
- classmethod convert_id_for_mongo(data: dict)[source]
Convert the python ‘id’ field to ‘_id’ field, with the prefix underscore.
- classmethod find_many(filters: dict, includes: Optional[dict] = None, sort: Optional[List[Tuple[str, int]]] = None, skip: Optional[int] = None, size: Optional[int] = None, to_dict: bool = False)[source]
Find objects matching the filters, returning an iterable generator. A usage sketch is given below.
- Parameters:
filters – the filters to match. This is the same as mongodb filter parameter, except that it will convert ‘id’ to ‘_id’ before a mongodb query.
includes – the fields to include in the result.
sort – a list of sort conditions. Every condition is a tuple of (field_name, sort_order). sort_order 1 for ascending, -1 for descending.
skip – the number of documents to skip.
size – the number of documents to return.
to_dict – If true, python dicts will be yielded instead of model objects. This performs better when returning a large number of objects in a json response.
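For illustration, here is a minimal sketch of a query using this method on the DataSet model documented later in this module; the filter and field values are made up:
datasets = DataSet.find_many(
    filters={"group_id": "group-1"},   # an 'id' key here would be converted to '_id' automatically
    sort=[("name", 1)],                # ascending by name
    skip=0,
    size=10,
    to_dict=True,                      # yield plain dicts, cheaper for a json response
)
for item in datasets:
    print(item["name"])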
- classmethod update_one(filters: dict, set_data: Optional[dict] = None, unset_data: Optional[dict] = None)[source]
Update one object matching the filters.
- Parameters:
filters – the filters to match.
set_data – the fields to set.
unset_data – the fields to delete.
- classmethod update_many(filters: dict, set_data: Optional[dict] = None, unset_data: Optional[dict] = None)[source]
Update all objects matching the filters.
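A short sketch of both update calls above, again using the DataSet model as an example with made-up values:
# update a single dataset matching the filter
DataSet.update_one(filters={"id": "coco2017-val"}, set_data={"status": "ready"})
# update every dataset in a group
DataSet.update_many(filters={"group_id": "group-1"}, set_data={"group_name": "COCO"})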
- classmethod batch_update(filters: dict, set_data: Optional[dict] = None, unset_data: Optional[dict] = None, batch_size: int = 20)[source]
This is almost the same as update_one, except that it will batch the update operations. The performance is better if we are updating a large number of objects.
- Parameters:
filters – the filters to match.
set_data – the fields to set.
unset_data – the fields to delete.
batch_size – the batch size. We will only send the update operations to mongodb when the batch is full.
- classmethod finish_batch_update()[source]
Send all the update operations left in batch queue to mongodb. This must be called after all the batch_update calls.
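A minimal sketch of the batching pattern, assuming a dataset_ids list exists and using the DataSet model as an example:
for dataset_id in dataset_ids:
    DataSet.batch_update(
        filters={"id": dataset_id},
        set_data={"status": "ready"},
        batch_size=50,                 # operations are sent to mongodb every 50 updates
    )
DataSet.finish_batch_update()          # flush whatever is left in the batch queue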
- save(refresh=False)[source]
Save current object to mongodb. If refresh is True, the object will be re-fetched from mongodb after saving.
- batch_save(batch_size: int = 20, set_on_insert: Optional[Dict] = None)[source]
The same as the save method, but it performs better when saving a large number of objects.
- Parameters:
batch_size – the batch size. We will only write to mongodb when the batch is full.
set_on_insert – the fields only need to be set when we are inserting a new object.
category
deepdataspace.model.category
The category model.
dataset
deepdataspace.model.dataset
The dataset model.
- class DataSet(*, name: str, id: str = None, path: str = None, type: str = None, status: str = 'waiting', detail_status: dict = {}, flag_export_link: str = None, object_types: list = [], num_images: int = 0, files: dict = {}, cover_url: str = None, description: str = None, description_func: str = None, group_id: str = None, group_name: str = None)[source]
- DataSet is a collection of images. This model only saves the metadata of the dataset, not the images. Every dataset has a corresponding individual collection to save its images.
Attributes:
- name: str
The dataset name.
- id: str
The dataset id.
- path: str
The dataset directory path.
- type: str
The dataset type, see deepdataspace.constants.DatasetType.
- status: str
The current status of the dataset, with default being DatasetStatus.Waiting. See deepdataspace.constants.DatasetStatus.
- detail_status: dict
Detailed status of every importer/processor. See deepdataspace.constants.DatasetStatus.
- flag_export_link: str
The dataset flag export link.
- object_types: list
List indicating what kind of objects this dataset contains. See deepdataspace.constants.AnnotationType.
- num_images: int
The number of images in this dataset.
- files: dict
Dictionary containing the relevant files of this dataset.
- cover_url: str
The cover image URL.
- description: str
The dataset description.
- description_func: str
The import path of a function used to generate the description for this dataset.
- group_id: str
The group id associated with this dataset.
- group_name: str
The group name associated with this dataset.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Datasets are stored in the datasets collection.
- classmethod create_dataset(name: str, id_: Optional[str] = None, type: Optional[str] = None, path: Optional[str] = None, files: Optional[dict] = None, description: Optional[str] = None, description_func: Optional[str] = None) DataSet [source]
Create a dataset. Multiple datasets can have the same name. If you want to create a unique dataset, please specify a unique id value.
- Parameters:
name – the dataset name. Multiple datasets can have the same name.
id_ – the optional dataset id. If provided, a unique dataset will be created with that id value.
type – the optional dataset type, can be “tsv”, “coco2017”.
path – the optional dataset directory path.
files – the optional dataset relevant files. The key is the file info, the value is the file path.
description – the optional dataset description.
description_func – the import path of a function that generates the description. The function takes the dataset instance as its only argument and returns a string. If this is provided, it takes precedence over the description argument.
- Returns:
the dataset object.
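A sketch of creating a dataset; the values below are made up, and only name is required:
dataset = DataSet.create_dataset(
    name="coco2017-val",
    id_="coco2017-val",                # pass a unique id to make the dataset unique
    type="coco2017",                   # e.g. "tsv" or "coco2017"
    path="/data/coco2017/val",
    description="COCO 2017 validation split",
)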
- classmethod get_importing_dataset(name: str, id_: Optional[str] = None, type: Optional[str] = None, path: Optional[str] = None, files: Optional[dict] = None) DataSet [source]
This is the same as create_dataset. But if the dataset is new, its status will be set to “waiting” instead of “ready”.
- add_image(uri: str, thumb_uri: Optional[str] = None, width: Optional[int] = None, height: Optional[int] = None, id: Optional[int] = None, metadata: Optional[dict] = None, flag: int = 0, flag_ts: int = 0) ImageModel [source]
Add an image to the dataset. Note that the same image will be added to the dataset multiple times if the same uri is provided again without the same image id. A usage sketch is given below.
- Parameters:
uri – the image uri, either a local file path starting with “file://” or a remote url starting with “http://”.
thumb_uri – the image thumbnail uri, also can be a local file path or a remote url.
width – the image width of full resolution.
height – the image height of full resolution.
id – the image id, if not provided, the image id will be the current number of images in the dataset.
metadata – any information data need to be stored.
flag – the image flag, 0 for not flagged, 1 for positive, 2 for negative.
flag_ts – the image flag timestamp.
- Returns:
the image object.
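A sketch of adding a single image, with made-up uris and sizes:
image = dataset.add_image(
    uri="file:///data/coco2017/val/000000000139.jpg",
    thumb_uri="https://example.com/thumbs/000000000139.jpg",
    width=640,
    height=426,
    metadata={"license": 1},           # any extra data to store
)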
- batch_add_image(uri: str, thumb_uri: Optional[str] = None, width: Optional[int] = None, height: Optional[int] = None, id_: Optional[int] = None, metadata: Optional[dict] = None, flag: int = 0, flag_ts: int = 0) ImageModel [source]
This is the batch version of add_image, which performs better against the database. However, this method is not thread safe, so make sure only one thread calls it. After the batch add is finished, call finish_batch_add_image to save the remaining changes to the database. A usage sketch is given below.
- Parameters:
uri – the image uri, either a local file path starting with “file://” or a remote url starting with “http://”.
thumb_uri – the image thumbnail uri, also can be a local file path or a remote url.
width – the image width of full resolution.
height – the image height of full resolution.
id_ – the image id. If not provided, the image id will be the current number of images in the dataset.
metadata – any information data need to be stored.
flag – the image flag, 0 for not flagged, 1 for positive, 2 for negative.
flag_ts – the image flag timestamp.
- Returns:
the image object, the flag indicating whether the batch is saved to db.
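A sketch of the batch pattern; images_to_import is an assumed list of dicts with uri, width and height keys:
for item in images_to_import:
    dataset.batch_add_image(uri=item["uri"], width=item["width"], height=item["height"])
dataset.finish_batch_add_image()       # flush the images still buffered in the batch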
image
deepdataspace.model.image
The image model.
- class ImageModel(*, id: int, idx: int, url: str, dataset_id: str, type: str = None, url_full_res: str = '', objects: List[Object] = [], width: int = None, height: int = None, metadata: str = '{}', flag: int = 0, flag_ts: int = 0, num_fn: dict = {}, num_fn_cat: dict = {}, num_fp: dict = {}, num_fp_cat: dict = {}, label_confirm: dict = {})[source]
- Image is the element of a dataset. Each image contains a list of objects. The image model is designed differently from other models. Normally, every model refers to one and only one mongodb collection, but the image model refers to multiple mongodb collections, one for each dataset. This improves the performance of image queries for large datasets, but it also changes the behaviors of ImageModel:
The ImageModel class is created dynamically before accessing the mongodb collection.
While creating the ImageModel class, the dataset id is passed in as a class attribute ‘belong_dataset’.
The get_collection and get_cls_id methods decide their return values based on the ‘belong_dataset’ attribute.
So the image model is designed to be used in this way:
IModel = Image(dataset_id='xxxx')  # the additional step to create the ImageModel class dynamically
image = IModel(...)
image.save()
Let’s say we have two datasets, A and B:
- Both DataSet A and B are stored in the collection "datasets".
- Images belonging to DataSet A are stored in the collection f"images@{dataset_A.id}".
- Images belonging to DataSet B are stored in the collection f"images@{dataset_B.id}".
Attributes:
- id: int
The image id.
- idx: int
The image sorting field.
- url: str
The image URL.
- dataset_id: str
Which dataset this image belongs to.
- type: str
What kind of dataset this image belongs to. Default is None. See deepdataspace.constants.DatasetType.
- url_full_res: str
The image URL of full resolution. Default is an empty string.
- objects: List[Object]
The objects in this image. Default is an empty list.
- width: int
The image width. Default is None.
- height: int
The image height. Default is None.
- metadata: str
The image metadata. Default is “{}”.
- flag: int
The image flag, values can be 0,1,2. Default is 0.
- flag_ts: int
The image flag timestamp. Default is 0.
- num_fn: dict
fn counter of image in the format {“label_id”: {90:x, 80: y, …, 10: z}}. Default is an empty dict.
- num_fn_cat: dict
fn counter of the image, categorized, in the format {“label_id”: {“category_id”: {90: x, 80: y, …, 10: z}}}. Default is an empty dict.
- num_fp: dict
fp counter of image in the format {“label_id”: {90:x, 80: y, …, 10: z}}. Default is an empty dict.
- num_fp_cat: dict
fp counter of the image, categorized, in the format {“label_id”: {“category_id”: {90: x, 80: y, …, 10: z}}}. Default is an empty dict.
- label_confirm: dict
Confirm status of every label sets, where confirm can be: 0 = not confirmed, 1 = confirmed, 2 = refine required. Format is {“label_id”: {“confirm”: int, “confirm_ts”: int}}. Default is an empty dict.
- classmethod get_collection()[source]
Instead of returning a single collection for all datasets, return a collection for each dataset.
- classmethod get_cls_id()[source]
Instead of returning the class name directly, return the class name with dataset id.
- classmethod from_dict(data: dict)[source]
This is almost the same as the BaseModel.from_dict method, except that it will set the idx field by id value if idx is not set.
- add_annotation(category: str, label: str = LabelName.GroundTruth, label_type: Literal['GT', 'Pred', 'User'] = 'GT', conf: float = 1.0, is_group: bool = False, bbox: Optional[Tuple[int, int, int, int]] = None, segmentation: Optional[List[List[int]]] = None, alpha_uri: Optional[str] = None, keypoints: Optional[List[Union[float, int]]] = None, keypoint_colors: Optional[List[int]] = None, keypoint_skeleton: Optional[List[int]] = None, keypoint_names: Optional[List[str]] = None, caption: Optional[str] = None, confirm_type: int = 0)[source]
Add an annotation to the image.
- Parameters:
category – the category name.
label – the label name.
conf – the confidence of the annotation.
is_group – whether the annotation is a group.
label_type – the label type, GT, Pred, User.
bbox – the bounding box of the annotation, (x1, y1, w, h).
segmentation – the segmentation of the annotation, [[l1p1, l1p2, …], [l2p1, l2p2, …]].
alpha_uri – the alpha uri of the annotation, either a local path or a remote url.
keypoints – the key points, [x1, y1, v1, conf1, x2, y2, v2, conf2, …]. v stands for visibility, 0 = not labeled, 1 = labeled but not visible, 2 = visible; conf stands for confidence, and it should always be 1.0 for ground truth.
keypoint_names – the key point names, [“nose”, “left_eye”, …].
keypoint_colors – the key point colors, [255, 0, 0, …].
keypoint_skeleton – the key point skeleton, [0, 1, 2, …].
caption – the caption of the annotation.
confirm_type – the confirm_type of the annotation, 0 = not confirmed, 1 = gt may be fn, 2 = pred may be fp
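A sketch of adding a ground-truth annotation to an image; the category and geometry are made up, and the defaults for label and label_type are kept:
image.add_annotation(
    category="person",
    conf=1.0,
    bbox=(10, 20, 100, 200),                               # (x1, y1, w, h)
    segmentation=[[10, 20, 110, 20, 110, 220, 10, 220]],   # one polygon as a flat list of points
)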
- batch_add_annotation(category: str, label: str = LabelName.GroundTruth, label_type: Literal['GT', 'Pred', 'User'] = 'GT', conf: float = 1.0, is_group: bool = False, bbox: Optional[Tuple[int, int, int, int]] = None, segmentation: Optional[List[List[int]]] = None, alpha_uri: Optional[str] = None, keypoints: Optional[List[Union[float, int]]] = None, keypoint_colors: Optional[List[int]] = None, keypoint_skeleton: Optional[List[int]] = None, keypoint_names: Optional[List[str]] = None, caption: Optional[str] = None, confirm_type: int = 0)[source]
The batch version of add_annotation. It performs better when saving a large number of annotations, but it does not guarantee dataset data consistency until DataSet.finish_batch_add_image is called. So this function must be used in a batch-add-image context like this:
for image_data in images:
    image = dataset.batch_add_image(**image_data)
    for annotation_data in annotations:
        image.batch_add_annotation(**annotation_data)
dataset.finish_batch_add_image()
- Parameters:
category – the category name.
label – the label name.
conf – the confidence of the annotation.
is_group – whether the annotation is a group.
label_type – the label type, GT, Pred, User.
bbox – the bounding box of the annotation, (x1, y1, w, h).
segmentation – the segmentation of the annotation, [[l1p1, l1p2, …], [l2p1, l2p2, …]].
alpha_uri – the alpha uri of the annotation, either a local path or a remote url.
keypoints – the key points, [x1, y1, v1, conf1, x2, y2, v2, conf2, …]. v stands for visibility, 0 = not labeled, 1 = labeled but not visible, 2 = visible; conf stands for confidence, and it should always be 1.0 for ground truth.
keypoint_names – the key point names, [“nose”, “left_eye”, …].
keypoint_colors – the key point colors, [255, 0, 0, …].
keypoint_skeleton – the key point skeleton, [0, 1, 2, …].
caption – the caption of the annotation.
confirm_type – the confirm_type of the annotation, 0 = not confirmed, 1 = gt may be fn, 2 = pred may be fp
- Returns:
None
- Image(dataset_id: str) Type[ImageModel] [source]
A shortcut to get the ImageModel for the specified dataset.
label
deepdataspace.model.label
The label model.
- class Label(*, name: str, id: str = '', type: str = '', dataset_id: str = '', compare_precisions: List = [], clone_from_label: str = '')[source]
- Label, or Label Set, or Prediction Set, is a set of predictions made to images of a dataset at the same time. GroundTruth and UserAnnotation are special label sets.
Attributes:
- name: str
The label name.
- id: str
The label id.
- type: str
Is it a prediction, a GroundTruth, or a user annotation? See deepdataspace.constants.LabelType.
- dataset_id: str
The dataset id this label belongs to.
- compare_precisions: list
Pre-calculated confidence thresholds for comparing predictions to the ground truth.
- clone_from_label: str
Which label set this label is cloned from.
label_task
deepdataspace.model.label_task
The label project related models.
- exception LabelProjectError(code: int, msg: str, http_status: int)[source]
The label project related error.
- exception LabelTaskError(code: int, msg: str, http_status: int)[source]
The label task related error.
- class LabelProject(*, id: str, name: str, datasets: List[dict], created_ts: int, owner: dict, managers: List[Dict], description: str = '', status: str = 'waiting', batch_size: int = None, label_times: int = None, review_times: int = None, task_num_total: int = 0, task_num_waiting: int = 0, task_num_working: int = 0, task_num_reviewing: int = 0, task_num_rejected: int = 0, task_num_accepted: int = 0, categories: str = '', pre_label: str = None)[source]
- The label project model. Each label project is associated with one or more datasets, one project owner, and several managers. The project distributes the datasets to label tasks, which are labeled by labelers and reviewed by reviewers, who are led by label leaders and review leaders.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Label projects are stored in the “label_projects” collection.
- classmethod create_project(name: str, owner: User, datasets: List[DataSet], managers: List[User], categories: List[str], description: str = '', pre_label: Optional[str] = None) LabelProject [source]
Create a label project.
- Parameters:
name – the project name.
owner – the project owner.
datasets – the project datasets, which cannot be an empty list.
managers – the project managers, which cannot be an empty list.
categories – the categories for classification and annotation tasks, which cannot be an empty list.
description – the project description.
pre_label – the pre-label set to be imported as default labels.
- edit_project(desc: Optional[str] = None, managers: Optional[List[User]] = None)[source]
Edit the project description and/or managers.
- Parameters:
desc – the project description. If None, it won’t be updated.
managers – the project managers. If None, they won’t be updated; otherwise, the list cannot be empty.
- init_project(batch_size: Optional[int] = None, label_times: Optional[int] = None, review_times: Optional[int] = None)[source]
Initialize the project with configurations. Each project can be initialized only once. A usage sketch is given below.
- Parameters:
batch_size – the number of images in a task, if 0, then all images of a dataset are in a task.
label_times – the number of labelers to label every image in a task.
review_times – the number of reviewers to review every label of every labeler of a task.
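A sketch of setting up a project end to end; the User and DataSet objects (owner_user, manager_user, dataset) are assumed to exist already:
project = LabelProject.create_project(
    name="person-boxes-round-1",
    owner=owner_user,
    datasets=[dataset],                # at least one DataSet
    managers=[manager_user],           # at least one manager
    categories=["person", "car"],
    description="Label person and car boxes",
)
project.init_project(batch_size=100, label_times=1, review_times=1)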
- update_subtask_counter()[source]
Update the number of tasks in each status. This is done by a mongodb aggregation.
- class ProjectRole(*, id: str, project_id: str, user_id: str, role: str)[source]
Every user has one or more roles in a project. This model defines common interfaces for project roles.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Derived model class should implement this function to get the mongodb collection.
- classmethod add_role(project: LabelProject, user_id: str, role: str)[source]
Assign a role to a user in a project.
- classmethod add_roles(project: LabelProject, user_ids: List[str], role: str)[source]
Assign a role to a list of users in a project.
- classmethod del_role(project: LabelProject, user_id: str, role: str)[source]
Delete a role of a user in a project.
- classmethod del_roles(project: LabelProject, user_ids: List[str], role: str)[source]
Delete a role of a list of users in a project.
- static is_member(user: User, project_id: str)[source]
Check if target user has any role in the project.
- static is_owner(user: User, project_id: str)[source]
Check if target user is the owner of the project.
- static is_manager(user: User, project_id: str)[source]
Check if target user is the manager of the project.
- static is_leader(user: User, project_id: str)[source]
Check if target user is the leader of the project.
- static is_gte_leader(user: User, project_id: str)[source]
Check if target user bears any role above or equal to leader in the project.
- static is_gt_leader(user: User, project_id: str)[source]
Check if target user bears any role above leader in the project.
- static is_label_leader(user: User, project_id: str)[source]
Check if target user is the label leader of the project.
- static is_review_leader(user: User, project_id: str)[source]
Check if target user is the review leader of the project.
- static is_worker(user: User, project_id: str)[source]
Check if target user is a worker of the project.
- static is_label_worker(user: User, project_id: str)[source]
Check if target user is a label worker of the project.
- static is_review_worker(user: User, project_id: str)[source]
Check if target user is a review worker of the project.
- static can_edit_project(user: User, project_id: str)[source]
Check if target user can edit the project.
- static can_view_project_progress(user: User, project_id)[source]
Check if target user can view the project progress.
- static can_assign_leader(user: User, project_id)[source]
Check if target user can assign a leader to the project.
- class TaskRole(*, id: str, project_id: str, task_id: str, user_id: str, user_name: str, role: str, is_active: bool = True, label_num_waiting: int = 0, review_num_waiting: int = 0, review_num_rejected: int = 0, review_num_accepted: int = 0, label_completed: bool = False, review_completed: bool = False)[source]
The role of a user in a task. Each project can contain multiple tasks, and users can be assigned to different roles in different tasks.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Derived model class should implement this function to get the mongodb collection.
- classmethod init_roles(task: LabelTask, users: List[User], role: str) List[TaskRole] [source]
Initialize the roles of a task and assign the role to users:
- check pre-conditions
- create a task role and a project role for every user
- init role data on every image of the task
Task roles can only be set in two ways:
- init_roles: grant one role to all target users at the same time
- replace_role: replace one user with another user for a role
- classmethod replace_role(task: LabelTask, old_user: User, new_user: User, role: str)[source]
Reassign the role of the task from the old user to the new user, transferring the role data of the task and task images accordingly.
- static is_task_label_leader(user: User, task_id: str)[source]
Check if target user is label leader of task.
- static is_task_review_leader(user: User, task_id: str)[source]
Check if target user is review leader of task.
- static can_init_label_worker(user: User, task_id)[source]
Check if target user can init label worker for task.
- static can_init_review_worker(user: User, task_id)[source]
Check if target user can init review worker for task.
- static can_replace_label_worker(user: User, task_id)[source]
Check if target user can replace label worker for task.
- static can_replace_review_worker(user: User, task_id)[source]
Check if target user can replace review worker for task.
- static can_commit_review(user: User, task_id)[source]
Check if target user can commit review for task.
- static can_view_all_roles(user: User, project_id)[source]
Check if target user can view all roles’ data.
- static update_progress_for_all_roles(task_id)[source]
Update progress for all roles of the task. This ensures the progress of every role is up to date, without having to worry about data integrity and consistency:
- count the image status for every role and update their counters
- for every role, reset label_completed = False and review_completed = False
- for every role, set label_completed = True if label_num_waiting == 0 and review_num_rejected == 0
- for every role, set review_completed = True if project.review_times == 0 or review_num_accepted == task.num_total
- set task status = LabelTaskStatus.Reviewing if all(role.label_completed is True and role.review_completed is True for role in roles)
- update the project's subtask progress
- class LabelTask(*, id: str, idx: int, project_id: str, dataset_id: str, created_ts: int, num_total: int = 0, status: str = 'waiting')[source]
The label task model.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Derived model class should implement this function to get the mongodb collection.
- set_leader(leader: User, role: str)[source]
Set leader for the task. Leader role can be either label leader or review leader.
- init_workers(workers: List[User], role: str)[source]
Initialize workers for the task, setting all workers of the given role type at the same time. The worker role can be either labeler or reviewer.
- replace_worker(old_user: User, new_user: User, role: str)[source]
Remove the old user from the role and assign the new user to the role.
- class UserLabelData(*, user_id: str, user_name: str, annotations: List[Dict] = [], id: str = None, created_ts: int = None)[source]
The user label data model. This does not refer to a mongodb collection directly, but is used as a data serializer.
- class UserReviewData(*, user_id: str, user_name: str, action: str, label_id: str, id: str = None, created_ts: int = None)[source]
The user review data model. This does not refer to a mongodb collection directly, but is used as a data serializer.
- class LabelTaskImageModel(*, id: str, idx: int, image_id: int, task_id: str, url: str, url_full_res: str, default_labels: UserLabelData = [], labels: Dict[str, List[UserLabelData]] = {}, reviews: Dict[str, List[UserReviewData]] = {}, role_status: Dict = {})[source]
The label task image model. This model behaves like ImageModel, but is used for label tasks. So to use this model, you should create a LabelTaskImageModel class dynamically with the LabelTaskImage shortcut.
- classmethod get_collection(*args, **kwargs) Collection[_DocumentType] [source]
Instead of returning a single collection for all datasets, return a collection for each dataset.
- classmethod get_cls_id()[source]
Instead of returning the class name directly, return the class name with dataset id.
- ensure_status_for_labeling(task: LabelTask, labeler: User)[source]
Check if target labeler can set label for target task.
The labeler cannot set labels in any of these conditions:
the task is not in working status.
their label has been accepted by all reviewers.
- set_label(task: LabelTask, labeler: User, label_annotations: List[Dict])[source]
Update the label annotations for a labeler.
- Parameters:
task – the task this image belongs to.
labeler – the labeler who is updating this image's labels.
label_annotations – the label annotations.
- Returns:
the label data dict.
A sample label_annotations:
label_annotations = [
    {
        "category_name": "str",
        "category_id": "str",
        "bounding_box": {
            "xmin": float,
            "ymin": float,
            "xmax": float,
            "ymax": float,
        }
    }
]
- ensure_status_for_reviewing(task: LabelTask, reviewer: User, label_id: str)[source]
- The reviewer cannot set a review in any of these conditions:
the task is not in working status.
the image is not labeled by all labelers.
the target label does not exist.
the reviewer has reviewed the target label before.
- set_review(task: LabelTask, reviewer: User, label_id: str, action: str)[source]
Update the review for a label.
- Parameters:
task – the task this image belongs to.
reviewer – the reviewer who is reviewing the target label.
label_id – the target label id that the reviewer is reviewing.
action – the review action.
- LabelTaskImage(dataset_id: str) Type[LabelTaskImageModel] [source]
A shortcut to create the LabelTaskImageModel class for the target dataset.
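A sketch of the dynamic-class pattern, mirroring the Image shortcut above; the ids are made up:
LTImage = LabelTaskImage(dataset_id="xxxx")     # build the per-dataset model class first
for lt_image in LTImage.find_many({"task_id": "yyyy"}):
    print(lt_image.image_id, lt_image.url)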
object
deepdataspace.model.object
The object model.
- class Object(*, label_name: str, label_type: str, label_id: str = '', category_id: str = '', category_name: str = '', conf: Union[float, int] = 1.0, is_group: Optional[bool] = False, bounding_box: Optional[Dict[str, Union[float, int]]] = {}, segmentation: Optional[str] = '', alpha: Optional[str] = '', points: Optional[List[Union[float, int]]] = [], lines: Optional[List[int]] = [], point_colors: Optional[List[int]] = [], point_names: Optional[List[str]] = [], caption: Optional[str] = '', confirm_type: Optional[int] = 0, compare_result: Optional[Dict[str, str]] = {}, matched_det_idx: Optional[int] = None)[source]
Objects are predictions, ground truths, or user annotations of an image. They are not stored in mongodb collections directly, but saved as nested documents in the image document.
Attributes:
- label_name: str
The label name.
- label_type: str
Is it a prediction, a GroundTruth, or a user annotation? See deepdataspace.constants.LabelType.
- label_id: str
The label id.
- category_id: str
The category id.
- category_name: str
The category name.
- conf: float
The confidence of the prediction.
- is_group: bool
Is it a group of objects?
- bounding_box: dict
The bounding box of the object, {“xmin”: 0, “ymin”: 0, “xmax”: 0, “ymax”: 0}.
- segmentation: str
The segmentation of the object.
- alpha: str
The alpha of the object.
- points: list
The points of the object.
- lines: list
The lines of the object.
- point_colors: list
The point colors of the object.
- point_names: list
The point names of the object.
- caption: str
The caption of the object.
- confirm_type: int
The image confirm type, 0 for unconfirmed, 1 for confirmed, 2 for rejected.
- compare_result: dict
The compare result of the object, {“90”: “FP”, …, “10”: “OK”}.
- matched_det_idx: int
The matched ground truth index, for prediction objects only.
user
deepdataspace.model.user
The user related models.
- exception IntegrationError
- class UserToken(*, id: str, user_id: str, expire: int)[source]
The session token for a logged-in user.
Attributes:
- id: str
The token id.
- user_id: str
The user id this token is bound to.
- expire: int
The token expiry timestamp, in seconds.
- class User(*, id: str, name: str, password: str, status: str, is_staff: bool = False)[source]
The user model.
Attributes:
- id: str
The user id.
- name: str
The username.
- password: str
The password, encrypted.
- status: str
The user status, active or inactive, see deepdataspace.constants.UserStatus.
- is_staff: bool
Is this user a staff member?
- classmethod create_user(username: str, is_staff: bool = False) User [source]
Create a user by username and set a random password for the user.
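For instance, a sketch of creating a regular user and a staff user:
user = User.create_user("alice")                # a random password is set internally
admin = User.create_user("admin", is_staff=True)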