tensorflow/models

research/object_detection/eval_util.py

Summary

Maintainability: F (estimated 1 wk of remediation effort)
Test Coverage: not available

File eval_util.py has 1067 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in research/object_detection/eval_util.py - About 2 days to fix

    Function _run_checkpoint_once has a Cognitive Complexity of 46 (exceeds 5 allowed). Consider refactoring.

    def _run_checkpoint_once(tensor_dict,
                             evaluators=None,
                             batch_processor=None,
                             checkpoint_dirs=None,
                             variables_to_restore=None,
    Severity: Minor
    Found in research/object_detection/eval_util.py - About 7 hrs to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

    Further reading
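As a minimal illustration of those rules (not code from eval_util.py), the sketch below shows the same behavior written two ways: nested conditionals that each add a flow break, and a guard-clause version that keeps the flow linear. This is the usual first step for reducing the complexity flagged in `_run_checkpoint_once` and similar functions.

```python
def categorize(values):
    """Nested version: each nested `if` adds a deeper flow break."""
    result = []
    for v in values:              # flow break
        if v is not None:         # nested flow break
            if v >= 0:            # nested deeper still
                result.append("pos")
            else:
                result.append("neg")
    return result


def categorize_flat(values):
    """Guard-clause version: same behavior, no nested branching."""
    result = []
    for v in values:              # flow break
        if v is None:             # guard, then back to linear flow
            continue
        result.append("pos" if v >= 0 else "neg")  # shorthand adds no complexity
    return result
```

Both return the same results; only the second reads linearly top to bottom.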

    Function result_dict_for_batched_example has a Cognitive Complexity of 41 (exceeds 5 allowed). Consider refactoring.

    def result_dict_for_batched_example(images,
                                        keys,
                                        detections,
                                        groundtruth=None,
                                        class_agnostic=False,
    Severity: Minor
    Found in research/object_detection/eval_util.py - About 6 hrs to fix

    Function get_evaluators has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.

    def get_evaluators(eval_config, categories, evaluator_options=None):
      """Returns the evaluator class according to eval_config, valid for categories.
    
      Args:
        eval_config: An `eval_pb2.EvalConfig`.
    Severity: Minor
    Found in research/object_detection/eval_util.py - About 3 hrs to fix

    Function evaluator_options_from_eval_config has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.

    def evaluator_options_from_eval_config(eval_config):
      """Produces a dictionary of evaluation options for each eval metric.
    
      Args:
        eval_config: An `eval_pb2.EvalConfig`.
    Severity: Minor
    Found in research/object_detection/eval_util.py - About 3 hrs to fix

    Function repeated_checkpoint_run has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.

    def repeated_checkpoint_run(tensor_dict,
                                summary_dir,
                                evaluators,
                                batch_processor=None,
                                checkpoint_dirs=None,
    Severity: Minor
    Found in research/object_detection/eval_util.py - About 2 hrs to fix

    Function repeated_checkpoint_run has 17 arguments (exceeds 4 allowed). Consider refactoring.

    def repeated_checkpoint_run(tensor_dict,
    Severity: Major
    Found in research/object_detection/eval_util.py - About 2 hrs to fix
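One common fix for a 17-argument signature is to group related parameters into a configuration object. The sketch below is hypothetical: `EvalSpec` and its fields are illustrative (a few field names mirror real arguments of `repeated_checkpoint_run`, the defaults are invented), not the Object Detection API's actual interface.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EvalSpec:
    """Hypothetical bundle for repeated_checkpoint_run's many parameters."""
    summary_dir: str
    checkpoint_dirs: Optional[List[str]] = None
    eval_interval_secs: int = 300
    max_number_of_evaluations: Optional[int] = None


def repeated_checkpoint_run(tensor_dict, evaluators, spec):
    # The body would read spec.summary_dir, spec.checkpoint_dirs, etc.,
    # instead of threading 17 positional/keyword arguments through calls.
    return spec.summary_dir
```

Call sites then construct one `EvalSpec` and pass it through, which also makes adding a new option a one-line change instead of a signature change at every level.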

      Function visualize_detection_results has 14 arguments (exceeds 4 allowed). Consider refactoring.

      def visualize_detection_results(result_dict,
      Severity: Major
      Found in research/object_detection/eval_util.py - About 1 hr to fix

        Function _run_checkpoint_once has 13 arguments (exceeds 4 allowed). Consider refactoring.

        def _run_checkpoint_once(tensor_dict,
        Severity: Major
        Found in research/object_detection/eval_util.py - About 1 hr to fix

          Function result_dict_for_batched_example has 10 arguments (exceeds 4 allowed). Consider refactoring.

          def result_dict_for_batched_example(images,
          Severity: Major
          Found in research/object_detection/eval_util.py - About 1 hr to fix

            Function visualize_detection_results has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.

            def visualize_detection_results(result_dict,
                                            tag,
                                            global_step,
                                            categories,
                                            summary_dir='',
            Severity: Minor
            Found in research/object_detection/eval_util.py - About 1 hr to fix

            Avoid deeply nested control flow statements.

                      if cat['name'] == class_label:
                        category = cat
                        break
                    if not category:
            Severity: Major
            Found in research/object_detection/eval_util.py - About 45 mins to fix
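The flagged snippet linearly scans `categories` for a matching `'name'` and breaks out of the loop. Assuming the category dicts keyed by `'name'` shown in the excerpt, the nesting can be removed with a single `next()` expression, or avoided entirely by building an index once:

```python
def find_category(categories, class_label):
    """One-expression replacement for the nested for/if/break lookup.

    Returns the matching category dict, or None if no name matches.
    """
    return next((cat for cat in categories if cat['name'] == class_label), None)


def build_category_index(categories):
    """If the lookup runs per detection, a dict built once is simpler still."""
    return {cat['name']: cat for cat in categories}
```

The `not category` check in the original then becomes a plain `if category is None:` on the return value.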

              Function result_dict_for_single_example has 6 arguments (exceeds 4 allowed). Consider refactoring.

              def result_dict_for_single_example(image,
              Severity: Minor
              Found in research/object_detection/eval_util.py - About 45 mins to fix

                Function result_dict_for_single_example has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.

                def result_dict_for_single_example(image,
                                                   key,
                                                   detections,
                                                   groundtruth=None,
                                                   class_agnostic=False,
                Severity: Minor
                Found in research/object_detection/eval_util.py - About 35 mins to fix

                Similar blocks of code found in 2 locations. Consider refactoring.

                  if original_image_spatial_shapes is None:
                    original_image_spatial_shapes = tf.tile(
                        tf.expand_dims(tf.shape(images)[1:3], axis=0),
                        multiples=[tf.shape(images)[0], 1])
                  else:
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 6 hrs to fix
                research/object_detection/eval_util.py on lines 898..905

                Duplicated Code

                Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                Tuning

                This issue has a mass of 107.

                We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

                Refactorings

                Further Reading
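The two flagged blocks differ only in the slice bounds (`[1:3]` vs. `[1:4]`) and in which argument is being defaulted, so the duplication can be removed with one small helper. The sketch below uses NumPy to stand in for the `tf.shape`/`tf.tile` calls so it stays self-contained; the helper name and signature are illustrative, not part of eval_util.py.

```python
import numpy as np


def default_shapes_per_image(images, begin, end, provided=None):
    """Extracts the duplicated default-shape logic into one place.

    When the caller passes None, tile the images' shape slice
    [begin:end] across the batch dimension; otherwise pass through.
    """
    if provided is not None:
        return provided
    shape = np.asarray(images.shape)
    # Shape (1, end-begin) tiled to (batch, end-begin), mirroring tf.tile.
    return np.tile(shape[begin:end][None, :], (images.shape[0], 1))
```

The two call sites would then collapse to `original_image_spatial_shapes = default_shapes_per_image(images, 1, 3, original_image_spatial_shapes)` and `true_image_shapes = default_shapes_per_image(images, 1, 4, true_image_shapes)` (with the tf equivalents of the numpy ops).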

                Similar blocks of code found in 2 locations. Consider refactoring.

                  if true_image_shapes is None:
                    true_image_shapes = tf.tile(
                        tf.expand_dims(tf.shape(images)[1:4], axis=0),
                        multiples=[tf.shape(images)[0], 1])
                  else:
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 6 hrs to fix
                research/object_detection/eval_util.py on lines 887..894

This issue has a mass of 107.

                Identical blocks of code found in 2 locations. Consider refactoring.

                          try:
                            if not losses_dict:
                              losses_dict = {}
                            result_dict, result_losses_dict = sess.run([tensor_dict,
                                                                        losses_dict])
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 1 hr to fix
                research/object_detection/legacy/evaluator.py on lines 232..236

This issue has a mass of 44.

                Similar blocks of code found in 2 locations. Consider refactoring.

                      if eval_metric_fn_key == 'lvis_mask_metrics' and hasattr(
                          eval_config, 'export_path'):
                        evaluator_options[eval_metric_fn_key].update({
                            'export_path': eval_config.export_path
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 1 hr to fix
                research/object_detection/eval_util.py on lines 1191..1195

This issue has a mass of 41.

                Similar blocks of code found in 2 locations. Consider refactoring.

                        if (batch + 1) % 100 == 0:
                          tf.logging.info('Running eval ops batch %d/%d', batch + 1,
                                          num_batches)
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 1 hr to fix
                research/adversarial_text/evaluate.py on lines 95..96

This issue has a mass of 41.
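Since the same "log every 100th batch" check appears in two files, a tiny shared helper removes it. This is a sketch (the helper name is invented); it returns whether it logged, which also makes the periodic behavior easy to unit-test.

```python
import logging


def log_every_n_batches(batch, num_batches, n=100):
    """Shared progress logger for the duplicated '(batch + 1) % 100' checks."""
    if (batch + 1) % n == 0:
        logging.info('Running eval ops batch %d/%d', batch + 1, num_batches)
        return True
    return False
```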

                Similar blocks of code found in 2 locations. Consider refactoring.

                      if eval_metric_fn_key == 'coco_detection_metrics' and hasattr(
                          eval_config, 'skip_predictions_for_unlabeled_class'):
                        evaluator_options[eval_metric_fn_key].update({
                            'skip_predictions_for_unlabeled_class':
                                (eval_config.skip_predictions_for_unlabeled_class)
                Severity: Major
                Found in research/object_detection/eval_util.py and 1 other location - About 1 hr to fix
                research/object_detection/eval_util.py on lines 1203..1206

This issue has a mass of 41.

                Similar blocks of code found in 2 locations. Consider refactoring.

                    vis_utils.visualize_boxes_and_labels_on_image_array(
                Severity: Minor
                Found in research/object_detection/eval_util.py and 1 other location - About 45 mins to fix
                official/recommendation/ncf_common.py on lines 150..150

This issue has a mass of 35.

                Similar blocks of code found in 2 locations. Consider refactoring.

                           tf.concat([tf.zeros([1], dtype=tf.int32),
                                      pad_shape-original_image_shape], axis=0),
                Severity: Minor
                Found in research/object_detection/eval_util.py and 1 other location - About 40 mins to fix
                research/object_detection/eval_util.py on lines 580..581

This issue has a mass of 34.

                Similar blocks of code found in 2 locations. Consider refactoring.

                  pad_hw_dim = tf.concat([tf.zeros([1], dtype=tf.int32),
                                          pad_shape - image_shape], axis=0)
                Severity: Minor
                Found in research/object_detection/eval_util.py and 1 other location - About 40 mins to fix
                research/object_detection/eval_util.py on lines 644..645

This issue has a mass of 34.
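Both flagged lines build the same thing: a padding vector with a zero prepended for the leading axis and `pad_shape - shape` for the spatial axes. A one-function extraction covers both call sites; NumPy again stands in for `tf.zeros`/`tf.concat` to keep the sketch runnable, and the helper name is illustrative.

```python
import numpy as np


def pad_amounts(pad_shape, current_shape):
    """Prepend a zero (leading axis) to the spatial padding amounts.

    Mirrors the duplicated
    tf.concat([tf.zeros([1], dtype=tf.int32), pad_shape - shape], axis=0).
    """
    pad_shape = np.asarray(pad_shape)
    current_shape = np.asarray(current_shape)
    return np.concatenate([np.zeros(1, dtype=np.int32),
                           pad_shape - current_shape])
```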
