tensorflow/models
official/projects/yolo/ops/loss_utils.py

Summary

Maintainability: F (estimated 3 days of remediation effort)
Test Coverage: (not reported)

File loss_utils.py has 426 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2024 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 6 hrs to fix

Function get_predicted_box has 10 arguments (exceeds 4 allowed). Consider refactoring.

    def get_predicted_box(width,
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py - About 1 hr to fix
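A common fix for a long argument list like this is to bundle related parameters into a small dataclass and pass one object instead. A minimal sketch, not the file's actual API: `BoxDecodeParams` and the trimmed two-argument signature are hypothetical, with field names taken from the signature fragments shown in this report.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class BoxDecodeParams:
  """Hypothetical bundle for the grid-related arguments of
  get_predicted_box (field names guessed from the snippet above)."""
  width: int
  height: int
  anchor_grid: Any = None
  grid_points: Any = None
  # the remaining arguments would be collected here as well

def get_predicted_box(encoded_boxes, params):
  # Illustrative body only: a real implementation would decode boxes
  # using params.width, params.height, params.anchor_grid, etc.
  return encoded_boxes

params = BoxDecodeParams(width=13, height=13)
print(get_predicted_box([0.5, 0.5, 1.0, 1.0], params))
```

Callers then construct the bundle once and thread it through, which also shrinks the signatures of the `_darknet_*` and `_scale_*` helpers flagged below, since they share most of these arguments.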

Function _search_body has 8 arguments (exceeds 4 allowed). Consider refactoring.

    def _search_body(self, pred_box, pred_class, boxes, classes, running_boxes,
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py - About 1 hr to fix

Function get_predicted_box has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.

    def get_predicted_box(width,
                          height,
                          encoded_boxes,
                          anchor_grid,
                          grid_points,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 55 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow-breaking structures are nested"

Further reading
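The nesting rule is usually the one that dominates. As a toy illustration (plain Python, unrelated to loss_utils.py): both functions below do the same work, but guard clauses keep the second one's flow linear, so each `if` costs a flat +1 instead of the nested version's escalating increments.

```python
def classify_nested(n):
  """Nested flow: the inner `if` sits one level deep, so Cognitive
  Complexity charges extra for it (+1 outer if, +2 nested inner if)."""
  if n >= 0:
    if n % 2 == 0:
      return "even"
    return "odd"
  return "negative"

def classify_flat(n):
  """Guard clauses: identical behaviour, linear flow, +1 per `if`."""
  if n < 0:
    return "negative"
  if n % 2 == 0:
    return "even"
  return "odd"

print(classify_nested(4), classify_flat(-3))  # even negative
```

Flattening `get_predicted_box` the same way, by dispatching early to the per-box-type helpers instead of branching inside one body, is the usual route under this score.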

Function _loop_cond has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def _loop_cond(unused_pred_box, unused_pred_class, boxes, unused_classes,
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py - About 50 mins to fix

Function _darknet_boxes has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def _darknet_boxes(encoded_boxes, width, height, anchor_grid, grid_points,
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py - About 50 mins to fix

Function _darknet_new_coord_boxes has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def _darknet_new_coord_boxes(encoded_boxes, width, height, anchor_grid,
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py - About 50 mins to fix

Function _scale_boxes has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def _scale_boxes(encoded_boxes, width, height, anchor_grid, grid_points,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 45 mins to fix

Function _new_coord_scale_boxes has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def _new_coord_scale_boxes(encoded_boxes, width, height, anchor_grid,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 45 mins to fix

Function _anchor_free_scale_boxes has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def _anchor_free_scale_boxes(encoded_boxes,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 45 mins to fix

Function build_grid has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def build_grid(indexes, truths, preds, ind_mask, update=False, grid=None):
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 45 mins to fix

Function __call__ has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def __call__(self,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def __init__(self,
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 35 mins to fix

Function _search_body has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

    def _search_body(self, pred_box, pred_class, boxes, classes, running_boxes,
                     running_classes, max_iou, idx):
      """Main search fn."""

      # capture the batch size to be used, and gather a slice of
Severity: Minor
Found in official/projects/yolo/ops/loss_utils.py - About 25 mins to fix


Similar blocks of code found in 2 locations. Consider refactoring.

    if self._track_classes:
      running_classes = tf.expand_dims(running_classes, axis=-1)
      class_slice = tf.zeros_like(running_classes) + class_slice
      class_slice = tf.concat([running_classes, class_slice], axis=-1)
      running_classes = tf.gather_nd(class_slice, ind, batch_dims=4)
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py and 1 other location - About 3 hrs to fix
official/projects/yolo/ops/loss_utils.py on lines 312..316

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 72.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.

    if self._track_boxes:
      running_boxes = tf.expand_dims(running_boxes, axis=-2)
      box_slice = tf.zeros_like(running_boxes) + box_slice
      box_slice = tf.concat([running_boxes, box_slice], axis=-2)
      running_boxes = tf.gather_nd(box_slice, ind, batch_dims=4)
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py and 1 other location - About 3 hrs to fix
official/projects/yolo/ops/loss_utils.py on lines 318..322
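The two flagged blocks differ only in which running tensor they track and in the axis used (`-1` for classes, `-2` for boxes), which makes them a natural candidate for one shared helper. A sketch of the extraction, using NumPy stand-ins for the TensorFlow ops so it runs standalone: the helper name `_stack_running_slice` is hypothetical, and the final `gather_nd` selection step is left out for brevity.

```python
import numpy as np

def _stack_running_slice(running, new_slice, axis):
  """Broadcast `new_slice` to the running tensor's shape and stack the
  two along `axis`, mirroring the expand_dims / zeros_like / concat
  pattern that _search_body currently inlines twice."""
  running = np.expand_dims(running, axis=axis)
  new_slice = np.zeros_like(running) + new_slice  # broadcast to shape
  return np.concatenate([running, new_slice], axis=axis)

# The classes variant (axis=-1) and boxes variant (axis=-2) now share
# one body; only the axis argument differs at the call sites.
running_classes = np.ones((2, 3))       # stand-in shapes for the demo
stacked_c = _stack_running_slice(running_classes, 5.0, axis=-1)
running_boxes = np.ones((2, 3, 4))
stacked_b = _stack_running_slice(running_boxes, 5.0, axis=-2)
print(stacked_c.shape, stacked_b.shape)
```

In the real code the helper would take the `tf` ops instead and return the stacked tensor for the subsequent `tf.gather_nd(..., ind, batch_dims=4)` call, collapsing both flagged blocks into two one-line call sites.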


Similar blocks of code found in 2 locations. Consider refactoring.

      x_left = tf.tile(
          tf.transpose(tf.expand_dims(x, axis=-1), perm=[1, 0]), [lheight, 1])
Severity: Major
Found in official/projects/yolo/ops/loss_utils.py and 1 other location - About 1 hr to fix
official/projects/centernet/ops/nms_ops.py on lines 68..69
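The flagged expand_dims / transpose / tile idiom builds the same row-repeated coordinate grid that a single meshgrid call produces, so switching both call sites to `tf.meshgrid` would remove the duplication with nms_ops.py as well. A NumPy sketch of the equivalence (the variable names `lheight`/`lwidth` follow the snippet above; `tf.meshgrid` uses the same default 'xy' indexing):

```python
import numpy as np

lheight, lwidth = 3, 4
x = np.arange(lwidth, dtype=np.float32)

# The flagged pattern: expand x to a row vector, then tile it lheight times.
x_left = np.tile(
    np.transpose(np.expand_dims(x, axis=-1), (1, 0)), [lheight, 1])

# Equivalent in one call: meshgrid with default 'xy' indexing repeats
# the x coordinates across lheight rows.
x_grid, _ = np.meshgrid(x, np.arange(lheight))

print(np.array_equal(x_left, x_grid))  # True
```

Whether to actually extract a shared grid helper or just use meshgrid inline is a judgment call; either way the two files stop carrying subtly divergeable copies.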

This issue has a mass of 38.
