tensorflow/models

research/object_detection/model_lib.py

Summary

Maintainability: F (estimated 1 wk to fix)
Test Coverage: no data

Function create_model_fn has a Cognitive Complexity of 132 (exceeds 5 allowed). Consider refactoring.

def create_model_fn(detection_model_fn,
                    configs,
                    hparams=None,
                    use_tpu=False,
                    postprocess_on_cpu=False):
Severity: Minor
Found in research/object_detection/model_lib.py - About 2 days to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
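A small hypothetical sketch (not taken from model_lib.py) of how these rules score code: the nested variant pays an extra increment for every level of nesting, while the guard-clause variant keeps the flow linear and scores far lower.

```python
# Hypothetical example illustrating the Cognitive Complexity rules above.

# Nested flow-breaking structures: each `if` costs 1, plus 1 per level
# of nesting, so this variant scores 1 + 2 + 3 = 6.
def fetch_value_nested(config, cache):
    if config is not None:                  # +1
        if "key" in config:                 # +2 (nested)
            if config["key"] in cache:      # +3 (deeply nested)
                return cache[config["key"]]
    return None

# Early returns keep the flow linear: each guard costs only 1, total 2.
def fetch_value_flat(config, cache):
    if config is None:                      # +1
        return None
    if "key" not in config:                 # +1
        return None
    return cache.get(config["key"])         # shorthand adds no complexity
```

Both variants return the same values; only the structure differs, which is exactly what Cognitive Complexity measures.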

File model_lib.py has 997 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in research/object_detection/model_lib.py - About 2 days to fix

Function provide_groundtruth has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.

def provide_groundtruth(model, labels, training_step=None):
  """Provides the labels to a model as groundtruth.

  This helper function extracts the corresponding boxes, classes,
  keypoints, weights, masks, etc. from the labels, and provides it
Severity: Minor
Found in research/object_detection/model_lib.py - About 2 hrs to fix


Function _prepare_groundtruth_for_eval has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.

def _prepare_groundtruth_for_eval(detection_model, class_agnostic,
                                  max_number_of_boxes):
  """Extracts groundtruth data from detection_model and prepares it for eval.

  Args:
Severity: Minor
Found in research/object_detection/model_lib.py - About 2 hrs to fix


Function create_estimator_and_inputs has 17 arguments (exceeds 4 allowed). Consider refactoring.

def create_estimator_and_inputs(run_config,
Severity: Major
Found in research/object_detection/model_lib.py - About 2 hrs to fix
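One common way to shrink a 17-argument signature is to bundle related options into a parameter object. The sketch below is illustrative only: the class name and the subset of fields shown are hypothetical stand-ins for a few of the real keyword arguments.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical parameter object bundling a few of the keyword arguments;
# the real function takes many more.
@dataclass
class EstimatorInputsConfig:
    pipeline_config_path: Optional[str] = None
    config_override: Optional[str] = None
    train_steps: Optional[int] = None
    use_tpu: bool = False


def create_estimator_and_inputs(run_config, inputs_config=None):
    """Takes one config object in place of a long keyword-argument list."""
    inputs_config = inputs_config or EstimatorInputsConfig()
    # ... build the estimator and input functions from inputs_config ...
    return run_config, inputs_config  # placeholder body for illustration
```

Callers then construct a single EstimatorInputsConfig, which also gives the option bundle a natural place for defaults and validation.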

Function create_estimator_and_inputs has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.

def create_estimator_and_inputs(run_config,
                                hparams=None,
                                pipeline_config_path=None,
                                config_override=None,
                                train_steps=None,
Severity: Minor
Found in research/object_detection/model_lib.py - About 1 hr to fix


Function provide_groundtruth has 41 lines of code (exceeds 25 allowed). Consider refactoring.

def provide_groundtruth(model, labels, training_step=None):
  """Provides the labels to a model as groundtruth.

  This helper function extracts the corresponding boxes, classes,
  keypoints, weights, masks, etc. from the labels, and provides it
Severity: Minor
Found in research/object_detection/model_lib.py - About 1 hr to fix

Function unstack_batch has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.

def unstack_batch(tensor_dict, unpad_groundtruth_tensors=True):
  """Unstacks all tensors in `tensor_dict` along 0th dimension.

  Unstacks tensor from the tensor dict along 0th dimension and returns a
  tensor_dict containing values that are lists of unstacked, unpadded tensors.
Severity: Minor
Found in research/object_detection/model_lib.py - About 1 hr to fix


Function create_estimator_and_inputs has 33 lines of code (exceeds 25 allowed). Consider refactoring.

def create_estimator_and_inputs(run_config,
                                hparams=None,
                                pipeline_config_path=None,
                                config_override=None,
                                train_steps=None,
Severity: Minor
Found in research/object_detection/model_lib.py - About 1 hr to fix

Function create_train_and_eval_specs has 8 arguments (exceeds 4 allowed). Consider refactoring.

def create_train_and_eval_specs(train_input_fn,
Severity: Major
Found in research/object_detection/model_lib.py - About 1 hr to fix

Function _evaluate_checkpoint has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.

def _evaluate_checkpoint(estimator,
                         input_fn,
                         checkpoint_path,
                         name,
                         max_retries=0):
Severity: Minor
Found in research/object_detection/model_lib.py - About 55 mins to fix


Function populate_experiment has 7 arguments (exceeds 4 allowed). Consider refactoring.

def populate_experiment(run_config,
Severity: Major
Found in research/object_detection/model_lib.py - About 50 mins to fix

Function continuous_eval has 6 arguments (exceeds 4 allowed). Consider refactoring.

def continuous_eval(estimator,
Severity: Minor
Found in research/object_detection/model_lib.py - About 45 mins to fix

Function continuous_eval_generator has 6 arguments (exceeds 4 allowed). Consider refactoring.

def continuous_eval_generator(estimator,
Severity: Minor
Found in research/object_detection/model_lib.py - About 45 mins to fix

Function create_model_fn has 5 arguments (exceeds 4 allowed). Consider refactoring.

def create_model_fn(detection_model_fn,
Severity: Minor
Found in research/object_detection/model_lib.py - About 35 mins to fix

Function _evaluate_checkpoint has 5 arguments (exceeds 4 allowed). Consider refactoring.

def _evaluate_checkpoint(estimator,
Severity: Minor
Found in research/object_detection/model_lib.py - About 35 mins to fix

Function create_train_and_eval_specs has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

def create_train_and_eval_specs(train_input_fn,
                                eval_input_fns,
                                eval_on_train_input_fn,
                                predict_input_fn,
                                train_steps,
Severity: Minor
Found in research/object_detection/model_lib.py - About 25 mins to fix


Similar blocks of code found in 3 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      input_data_fields.groundtruth_not_exhaustive_classes):
    groundtruth[input_data_fields.groundtruth_not_exhaustive_classes] = tf.pad(
        tf.stack(
            detection_model.groundtruth_lists(
Severity: Major
Found in research/object_detection/model_lib.py and 2 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 194..200
research/object_detection/model_lib.py on lines 230..236

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 46.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
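The three flagged blocks differ only in the field name being copied, so one way to apply DRY here is a single helper driven by a list of field pairs. This is a sketch, not the project's actual code: the TensorFlow pieces (tf.stack, tf.pad, the DetectionModel interface) are stubbed out so the shape of the refactor is visible on its own.

```python
def copy_padded_groundtruth(detection_model, groundtruth, field_pairs,
                            stack_fn, pad_fn):
    """Stacks, pads, and copies every groundtruth field the model carries.

    field_pairs holds (model field name, output groundtruth key) tuples;
    stack_fn and pad_fn stand in for the tf.stack / tf.pad calls shown in
    the report's excerpts.
    """
    for src_field, dst_field in field_pairs:
        if detection_model.groundtruth_has_field(src_field):
            groundtruth[dst_field] = pad_fn(
                stack_fn(detection_model.groundtruth_lists(src_field)))


class FakeDetectionModel:
    """Minimal stand-in for the real DetectionModel, for illustration only."""

    def __init__(self, data):
        self._data = data

    def groundtruth_has_field(self, field):
        return field in self._data

    def groundtruth_lists(self, field):
        return self._data[field]
```

In model_lib.py itself, the three `if` blocks could then collapse into one call whose field_pairs list names groundtruth_labeled_classes, groundtruth_verified_neg_classes, and groundtruth_not_exhaustive_classes.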

Similar blocks of code found in 3 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      input_data_fields.groundtruth_verified_neg_classes):
    groundtruth[input_data_fields.groundtruth_verified_neg_classes] = tf.pad(
        tf.stack(
            detection_model.groundtruth_lists(
Severity: Major
Found in research/object_detection/model_lib.py and 2 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 202..208
research/object_detection/model_lib.py on lines 230..236

This issue has a mass of 46.

Similar blocks of code found in 3 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      input_data_fields.groundtruth_labeled_classes):
    groundtruth[input_data_fields.groundtruth_labeled_classes] = tf.pad(
        tf.stack(
            detection_model.groundtruth_lists(
Severity: Major
Found in research/object_detection/model_lib.py and 2 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 194..200
research/object_detection/model_lib.py on lines 202..208

This issue has a mass of 46.

Similar blocks of code found in 9 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(fields.BoxListFields.is_crowd):
    groundtruth[input_data_fields.groundtruth_is_crowd] = tf.stack(
        detection_model.groundtruth_lists(fields.BoxListFields.is_crowd))
Severity: Major
Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 158..160
research/object_detection/model_lib.py on lines 170..172
research/object_detection/model_lib.py on lines 183..187
research/object_detection/model_lib.py on lines 189..191
research/object_detection/model_lib.py on lines 210..214
research/object_detection/model_lib.py on lines 215..219
research/object_detection/model_lib.py on lines 220..224
research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.
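Since the nine flagged blocks all share the same has-field / stack / assign shape, another possible refactoring is a data-driven table of field-to-key mappings. The sketch below is hypothetical: the string names mirror a subset of the real fields.BoxListFields / InputDataFields constants, and stack_fn stands in for tf.stack.

```python
# Illustrative subset of the nine field mappings flagged by the report.
FIELD_TO_GROUNDTRUTH_KEY = {
    "is_crowd": "groundtruth_is_crowd",
    "group_of": "groundtruth_group_of",
    "masks": "groundtruth_instance_masks",
    "keypoint_visibilities": "groundtruth_keypoint_visibilities",
}


def stack_present_fields(groundtruth_lists, field_to_key, stack_fn):
    """Returns {output key: stacked value} for every field actually present."""
    return {
        key: stack_fn(groundtruth_lists[field])
        for field, key in field_to_key.items()
        if field in groundtruth_lists
    }
```

Each new optional field then becomes a one-line table entry instead of another near-duplicate `if` block.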

Similar blocks of code found in 9 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      fields.BoxListFields.keypoint_visibilities):
    groundtruth[input_data_fields.groundtruth_keypoint_visibilities] = tf.stack(
        detection_model.groundtruth_lists(
            fields.BoxListFields.keypoint_visibilities))
Severity: Major
Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 158..160
research/object_detection/model_lib.py on lines 162..164
research/object_detection/model_lib.py on lines 170..172
research/object_detection/model_lib.py on lines 189..191
research/object_detection/model_lib.py on lines 210..214
research/object_detection/model_lib.py on lines 215..219
research/object_detection/model_lib.py on lines 220..224
research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

Similar blocks of code found in 9 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      fields.BoxListFields.densepose_num_points):
    groundtruth[input_data_fields.groundtruth_dp_num_points] = tf.stack(
        detection_model.groundtruth_lists(
            fields.BoxListFields.densepose_num_points))
Severity: Major
Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 158..160
research/object_detection/model_lib.py on lines 162..164
research/object_detection/model_lib.py on lines 170..172
research/object_detection/model_lib.py on lines 183..187
research/object_detection/model_lib.py on lines 189..191
research/object_detection/model_lib.py on lines 215..219
research/object_detection/model_lib.py on lines 220..224
research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

Similar blocks of code found in 9 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(
      fields.BoxListFields.densepose_surface_coords):
    groundtruth[input_data_fields.groundtruth_dp_surface_coords] = tf.stack(
        detection_model.groundtruth_lists(
            fields.BoxListFields.densepose_surface_coords))
Severity: Major
Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 158..160
research/object_detection/model_lib.py on lines 162..164
research/object_detection/model_lib.py on lines 170..172
research/object_detection/model_lib.py on lines 183..187
research/object_detection/model_lib.py on lines 189..191
research/object_detection/model_lib.py on lines 210..214
research/object_detection/model_lib.py on lines 215..219
research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

Similar blocks of code found in 9 locations. Consider refactoring.

  if detection_model.groundtruth_has_field(fields.BoxListFields.masks):
    groundtruth[input_data_fields.groundtruth_instance_masks] = tf.stack(
        detection_model.groundtruth_lists(fields.BoxListFields.masks))
Severity: Major
Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
research/object_detection/model_lib.py on lines 162..164
research/object_detection/model_lib.py on lines 170..172
research/object_detection/model_lib.py on lines 183..187
research/object_detection/model_lib.py on lines 189..191
research/object_detection/model_lib.py on lines 210..214
research/object_detection/model_lib.py on lines 215..219
research/object_detection/model_lib.py on lines 220..224
research/object_detection/model_lib.py on lines 226..228

                      Duplicated Code

                      Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                      Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                      When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                      Tuning

                      This issue has a mass of 43.

                      We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                      The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                      If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                      See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
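As a concrete illustration, a threshold override in `.codeclimate.yml` has historically looked like the sketch below. This is one shape the configuration has taken (the `engines`-based schema); the key names and the current schema should be confirmed against the codeclimate-duplication plugin documentation, and the `mass_threshold` value of 50 is only an example:

```yaml
# Example .codeclimate.yml fragment: raise the duplication mass threshold
# for Python so that smaller repeated blocks are no longer reported.
engines:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 50   # default is lower; raise to report less
```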


                      Similar blocks of code found in 9 locations. Consider refactoring.
                      Open

                        if detection_model.groundtruth_has_field(fields.BoxListFields.group_of):
                          groundtruth[input_data_fields.groundtruth_group_of] = tf.stack(
                              detection_model.groundtruth_lists(fields.BoxListFields.group_of))
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
                      research/object_detection/model_lib.py on lines 158..160
                      research/object_detection/model_lib.py on lines 162..164
                      research/object_detection/model_lib.py on lines 170..172
                      research/object_detection/model_lib.py on lines 183..187
                      research/object_detection/model_lib.py on lines 210..214
                      research/object_detection/model_lib.py on lines 215..219
                      research/object_detection/model_lib.py on lines 220..224
                      research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

                      Similar blocks of code found in 9 locations. Consider refactoring.
                      Open

                        if detection_model.groundtruth_has_field(fields.BoxListFields.keypoints):
                          groundtruth[input_data_fields.groundtruth_keypoints] = tf.stack(
                              detection_model.groundtruth_lists(fields.BoxListFields.keypoints))
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
                      research/object_detection/model_lib.py on lines 158..160
                      research/object_detection/model_lib.py on lines 162..164
                      research/object_detection/model_lib.py on lines 183..187
                      research/object_detection/model_lib.py on lines 189..191
                      research/object_detection/model_lib.py on lines 210..214
                      research/object_detection/model_lib.py on lines 215..219
                      research/object_detection/model_lib.py on lines 220..224
                      research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

                      Similar blocks of code found in 9 locations. Consider refactoring.
                      Open

                        if detection_model.groundtruth_has_field(
                            fields.BoxListFields.densepose_part_ids):
                          groundtruth[input_data_fields.groundtruth_dp_part_ids] = tf.stack(
                              detection_model.groundtruth_lists(
                                  fields.BoxListFields.densepose_part_ids))
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
                      research/object_detection/model_lib.py on lines 158..160
                      research/object_detection/model_lib.py on lines 162..164
                      research/object_detection/model_lib.py on lines 170..172
                      research/object_detection/model_lib.py on lines 183..187
                      research/object_detection/model_lib.py on lines 189..191
                      research/object_detection/model_lib.py on lines 210..214
                      research/object_detection/model_lib.py on lines 220..224
                      research/object_detection/model_lib.py on lines 226..228

This issue has a mass of 43.

                      Similar blocks of code found in 9 locations. Consider refactoring.
                      Open

                        if detection_model.groundtruth_has_field(fields.BoxListFields.track_ids):
                          groundtruth[input_data_fields.groundtruth_track_ids] = tf.stack(
                              detection_model.groundtruth_lists(fields.BoxListFields.track_ids))
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 8 other locations - About 1 hr to fix
                      research/object_detection/model_lib.py on lines 158..160
                      research/object_detection/model_lib.py on lines 162..164
                      research/object_detection/model_lib.py on lines 170..172
                      research/object_detection/model_lib.py on lines 183..187
                      research/object_detection/model_lib.py on lines 189..191
                      research/object_detection/model_lib.py on lines 210..214
                      research/object_detection/model_lib.py on lines 215..219
                      research/object_detection/model_lib.py on lines 220..224

This issue has a mass of 43.
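The groundtruth blocks flagged above all share one shape: check whether the model carries a field, and if so stack its list into the output dict under a corresponding key. A data-driven loop over a field-to-key table is one way to collapse them. The sketch below is self-contained and uses stand-ins for the TF Object Detection API objects (plain strings instead of `fields.BoxListFields` members, `list()` instead of `tf.stack`, a fake model class instead of `DetectionModel`) so the shape of the refactor is visible without TensorFlow; the helper and table names are hypothetical:

```python
# Table mapping optional groundtruth fields to output-dict keys.
# In the real code these would be fields.BoxListFields.* members and
# fields.InputDataFields.* keys; strings stand in for them here.
FIELD_TO_OUTPUT_KEY = {
    "masks": "groundtruth_instance_masks",
    "group_of": "groundtruth_group_of",
    "keypoints": "groundtruth_keypoints",
    "track_ids": "groundtruth_track_ids",
}


class FakeModel:
    """Mimics the two accessors the duplicated blocks call on the model."""

    def __init__(self, lists):
        self._lists = lists

    def groundtruth_has_field(self, field):
        return field in self._lists

    def groundtruth_lists(self, field):
        return self._lists[field]


def stack_optional_groundtruth(model, groundtruth):
    # One loop replaces the nine near-identical if/stack blocks; in the
    # real code the list() call would be tf.stack(...).
    for field, output_key in FIELD_TO_OUTPUT_KEY.items():
        if model.groundtruth_has_field(field):
            groundtruth[output_key] = list(model.groundtruth_lists(field))
    return groundtruth


model = FakeModel({"masks": [1, 2], "track_ids": [3]})
result = stack_optional_groundtruth(model, {})
print(result)  # only the fields the model actually carries appear
```

Adding a new optional field then becomes a one-line table entry instead of another copied block, which is exactly the divergence risk the DRY note above describes.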

                      Similar blocks of code found in 7 locations. Consider refactoring.
                      Open

                            if fields.InputDataFields.image_additional_channels in features:
                              eval_dict[fields.InputDataFields.image_additional_channels] = features[
                                  fields.InputDataFields.image_additional_channels]
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 6 other locations - About 1 hr to fix
                      research/object_detection/legacy/evaluator.py on lines 105..107
                      research/object_detection/inputs.py on lines 737..739
                      research/object_detection/inputs.py on lines 740..742
                      research/object_detection/inputs.py on lines 743..745
                      research/object_detection/inputs.py on lines 746..748
                      research/object_detection/inputs.py on lines 749..751

This issue has a mass of 39.

                      Similar blocks of code found in 5 locations. Consider refactoring.
                      Open

                      MODEL_BUILD_UTIL_MAP = {
                          'get_configs_from_pipeline_file':
                              config_util.get_configs_from_pipeline_file,
                          'create_pipeline_proto_from_configs':
                              config_util.create_pipeline_proto_from_configs,
                      Severity: Major
                      Found in research/object_detection/model_lib.py and 4 other locations - About 45 mins to fix
                      official/projects/pix2seq/modeling/transformer.py on lines 318..325
                      research/deep_speech/deep_speech.py on lines 252..259
                      research/object_detection/builders/model_builder.py on lines 132..146
                      research/object_detection/builders/model_builder.py on lines 240..254

This issue has a mass of 35.

                      Identical blocks of code found in 2 locations. Consider refactoring.
                      Open

                            eval_dict = eval_util.result_dict_for_batched_example(
                                eval_images,
                                features[inputs.HASH_KEY],
                                detections,
                                groundtruth,
                      Severity: Minor
                      Found in research/object_detection/model_lib.py and 1 other location - About 45 mins to fix
                      research/object_detection/model_lib_v2.py on lines 813..817

This issue has a mass of 35.
