tensorflow/models
research/object_detection/utils/per_image_evaluation.py

Summary

Maintainability: F (estimated 4 days to fix)
Test Coverage

File per_image_evaluation.py has 620 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 1 day to fix

Function _compute_tp_fp_for_single_class has a Cognitive Complexity of 36 (exceeds 5 allowed). Consider refactoring.
Open

  def _compute_tp_fp_for_single_class(self,
                                      detected_boxes,
                                      detected_scores,
                                      groundtruth_boxes,
                                      groundtruth_is_difficult_list,
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py - About 5 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity estimates how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
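As a hypothetical illustration of the rules above (simplified, not taken from the flagged file): nesting is what drives the score up, so flattening nested conditions with guard clauses is a common way to bring it back down without changing behavior.

```python
# Hypothetical sketch: each nested `if` inside the `for` adds to the
# cognitive-complexity score; the guard-clause version keeps the flow linear.

def match_nested(scores, threshold):
    """Nested version: the `if` inside `if` inside `for` costs extra."""
    matches = []
    for i, score in enumerate(scores):
        if score is not None:
            if score >= threshold:
                matches.append(i)
    return matches

def match_flat(scores, threshold):
    """Guard-clause version: same behavior, one level of nesting."""
    matches = []
    for i, score in enumerate(scores):
        if score is None or score < threshold:
            continue
        matches.append(i)
    return matches

print(match_nested([0.9, None, 0.3, 0.7], 0.5))  # [0, 3]
print(match_flat([0.9, None, 0.3, 0.7], 0.5))    # [0, 3]
```

Both functions return the same matches; only the second reads linearly, which is the property the metric rewards.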

Function _compute_is_class_correctly_detected_in_image has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

  def _compute_is_class_correctly_detected_in_image(self,
                                                    detected_boxes,
                                                    detected_scores,
                                                    groundtruth_boxes,
                                                    detected_masks=None,
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py - About 1 hr to fix


Function _compute_tp_fp has 9 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _compute_tp_fp(self,
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 1 hr to fix

Function compute_object_detection_metrics has 9 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def compute_object_detection_metrics(self,
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 1 hr to fix

Function _get_ith_class_arrays has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _get_ith_class_arrays(self, detected_boxes, detected_scores,
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 1 hr to fix

Function _compute_cor_loc has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _compute_cor_loc(self,
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 50 mins to fix

Function _compute_tp_fp_for_single_class has 7 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _compute_tp_fp_for_single_class(self,
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py - About 50 mins to fix

Function _get_overlaps_and_scores_mask_mode has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _get_overlaps_and_scores_mask_mode(self, detected_boxes, detected_scores,
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py - About 45 mins to fix

Function _compute_is_class_correctly_detected_in_image has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _compute_is_class_correctly_detected_in_image(self,
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self,
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py - About 35 mins to fix
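The argument-count findings above are typically addressed by grouping related parameters into a single object. A hypothetical sketch (the class name and field grouping are illustrative, not the module's actual API; plain lists stand in for the numpy arrays):

```python
from dataclasses import dataclass
from typing import Optional, Sequence

# Hypothetical parameter object: bundling the per-image inputs that the
# flagged functions currently pass individually would shrink each
# 5- to 9-argument signature down to one or two parameters.
@dataclass
class SingleImageInputs:
    detected_boxes: Sequence
    detected_scores: Sequence
    groundtruth_boxes: Sequence
    groundtruth_is_difficult_list: Sequence
    detected_masks: Optional[Sequence] = None
    groundtruth_masks: Optional[Sequence] = None

def num_detections(inputs: SingleImageInputs) -> int:
    """Example consumer: reads from the bundle instead of six arguments."""
    return len(inputs.detected_boxes)

inputs = SingleImageInputs(
    detected_boxes=[[0, 0, 1, 1], [0, 0, 2, 2]],
    detected_scores=[0.9, 0.8],
    groundtruth_boxes=[[0, 0, 1, 1]],
    groundtruth_is_difficult_list=[False],
)
print(num_detections(inputs))  # 2
```

A bundle like this also makes the optional mask fields explicit, which several of the flagged functions thread through as separate keyword arguments today.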

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        is_evaluatable = (not tp_fp_labels[i] and
                          not is_matched_to_difficult[i] and
                          iou[i, gt_id] >= self.matching_iou_threshold and
                          not is_matched_to_group_of[i])
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 2 hrs to fix
research/object_detection/utils/per_image_evaluation.py on lines 559..562

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 59.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too easily, try raising the threshold. If you suspect it isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
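For this particular pair, the two blocks differ only in whether they index the iou or the ioa matrix, so one DRY fix is to extract the predicate into a helper parameterized by the overlap matrix. A hypothetical sketch (names mirror the excerpt, but the helper itself is not part of the module; plain nested lists stand in for the numpy matrices):

```python
def is_evaluatable(i, gt_id, overlaps, tp_fp_labels,
                   is_matched_to_difficult, is_matched_to_group_of,
                   matching_iou_threshold):
    """Shared predicate for both duplicated sites.

    `overlaps` is whichever matrix (iou or ioa) the caller matches
    against, so the four-clause expression lives in exactly one place.
    """
    return (not tp_fp_labels[i] and
            not is_matched_to_difficult[i] and
            overlaps[i][gt_id] >= matching_iou_threshold and
            not is_matched_to_group_of[i])

# Toy usage: detection 0 overlaps groundtruth 0 above threshold and is
# not yet matched to anything, so it is evaluatable.
print(is_evaluatable(0, 0, [[0.7]], [False], [False], [False], 0.5))  # True
```

Each call site then passes its own matrix, which collapses the duplicated four-line expression without changing either code path's behavior.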

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        is_evaluatable = (not tp_fp_labels[i] and
                          not is_matched_to_difficult[i] and
                          ioa[i, gt_id] >= self.matching_iou_threshold and
                          not is_matched_to_group_of[i])
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 2 hrs to fix
research/object_detection/utils/per_image_evaluation.py on lines 521..524

This issue has a mass of 59.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if (detected_masks is not None and
        groundtruth_masks is None) or (detected_masks is None and
                                       groundtruth_masks is not None):
      raise ValueError(
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 2 hrs to fix
research/object_detection/core/preprocessor.py on lines 663..665

This issue has a mass of 50.
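The duplicated mask-consistency guard above could likewise move into a shared validation helper. A hypothetical sketch (the helper name is illustrative; note the two-sided condition in the excerpt reduces to a single exclusive-or check):

```python
def validate_masks(detected_masks, groundtruth_masks):
    """Hypothetical shared check: masks must be supplied for both sides
    or for neither, mirroring the duplicated guard in the excerpt.

    `(a is None) != (b is None)` is an exclusive-or on the two `None`
    tests, equivalent to the original two-branch condition.
    """
    if (detected_masks is None) != (groundtruth_masks is None):
        raise ValueError(
            'Detected masks is available but groundtruth masks is not, '
            'or the other way around')

validate_masks(None, None)    # OK: neither side provided
validate_masks([[1]], [[1]])  # OK: both sides provided
try:
    validate_masks([[1]], None)
except ValueError:
    print('mismatch rejected')  # prints: mismatch rejected
```

Both call sites (here and in preprocessor.py) would then share one authoritative version of the rule.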

Similar blocks of code found in 2 locations. Consider refactoring.
Open

      return np.concatenate(
          (scores[valid_entries], scores_mask_group_of)), np.concatenate(
              (tp_fp_labels[valid_entries].astype(float),
               tp_fp_labels_mask_group_of))
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 1 hr to fix
research/object_detection/utils/per_image_evaluation.py on lines 620..623

This issue has a mass of 41.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

      return np.concatenate(
          (scores[valid_entries], scores_box_group_of)), np.concatenate(
              (tp_fp_labels[valid_entries].astype(float),
               tp_fp_labels_box_group_of))
Severity: Major
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 1 hr to fix
research/object_detection/utils/per_image_evaluation.py on lines 614..617

This issue has a mass of 41.
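The two concatenation returns above differ only in which group-of arrays they append (box vs. mask), so a shared helper is one way to satisfy DRY here. A hypothetical sketch (the helper is illustrative; plain lists stand in for the numpy arrays, with a list comprehension playing the role of boolean-mask indexing):

```python
def merge_group_of(scores, tp_fp_labels, valid_entries,
                   scores_group_of, labels_group_of):
    """Append group-of results to the valid entries in one place.

    Usable by both the box and mask code paths, which currently repeat
    this concatenation with different `*_group_of` inputs.
    """
    # Keep only entries flagged valid, mirroring scores[valid_entries].
    merged_scores = [s for s, v in zip(scores, valid_entries) if v]
    merged_scores += list(scores_group_of)
    # Cast labels to float, mirroring tp_fp_labels[...].astype(float).
    merged_labels = [float(l) for l, v in zip(tp_fp_labels, valid_entries) if v]
    merged_labels += list(labels_group_of)
    return merged_scores, merged_labels

scores, labels = merge_group_of(
    [0.9, 0.2, 0.8], [True, False, True], [True, False, True], [0.5], [1.0])
print(scores)  # [0.9, 0.8, 0.5]
print(labels)  # [1.0, 1.0, 1.0]
```

Each caller would then pass its own group-of arrays, leaving a single authoritative copy of the filter-cast-concatenate logic.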

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if ioa.shape[1] > 0:
      scores_box_group_of, tp_fp_labels_box_group_of = compute_match_ioa(
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 40 mins to fix
research/object_detection/utils/per_image_evaluation.py on lines 589..590

This issue has a mass of 34.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            if not is_gt_detected[gt_id]:
              tp_fp_labels[i] = True
              is_gt_detected[gt_id] = True
              is_matched_to_box[i] = is_box
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 40 mins to fix
research/object_detection/metrics/coco_evaluation.py on lines 1848..1851

This issue has a mass of 34.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    if ioa_mask.shape[1] > 0:
      scores_mask_group_of, tp_fp_labels_mask_group_of = compute_match_ioa(
Severity: Minor
Found in research/object_detection/utils/per_image_evaluation.py and 1 other location - About 40 mins to fix
research/object_detection/utils/per_image_evaluation.py on lines 606..607

This issue has a mass of 34.
