tensorflow/models
research/deeplab/model.py

Summary

Maintainability: F (estimated effort to fix all issues: 6 days)
Test Coverage: not measured

File model.py has 767 lines of code (exceeds 250 allowed). Consider refactoring.

# Lint as: python2, python3
# Copyright 2018 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
Severity: Major
Found in research/deeplab/model.py - About 1 day to fix

Function extract_features has a Cognitive Complexity of 63 (exceeds 5 allowed). Consider refactoring.

    def extract_features(images,
                         model_options,
                         weight_decay=0.0001,
                         reuse=None,
                         is_training=False,

Severity: Minor
Found in research/deeplab/model.py - About 1 day to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity estimates how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each break in the linear flow of the code
• Code is considered more complex when flow-breaking structures are nested
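The nesting rule is the one that usually dominates scores like those above. As a quick illustration (hypothetical code, not taken from model.py), the two functions below are behaviorally identical, but the guard-clause version avoids the nested increments that drive the metric up:

```python
# Hypothetical illustration: nesting is what inflates cognitive
# complexity, so flattening with early returns brings the score down
# without changing behavior.

def pick_stride_nested(options):
    # Nested form: each `if` inside another `if` adds an extra increment.
    if options is not None:
        if options.get('decoder_output_stride'):
            if options['decoder_output_stride'] > 0:
                return options['decoder_output_stride']
    return 16  # default output stride


def pick_stride_flat(options):
    # Guard-clause form: the same logic as a linear sequence of checks.
    if options is None:
        return 16
    stride = options.get('decoder_output_stride')
    if not stride or stride <= 0:
        return 16
    return stride
```

Both return the same value for every input; only the flat version keeps each check at the top level, which is exactly what the metric rewards.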

Function refine_by_decoder has a Cognitive Complexity of 36 (exceeds 5 allowed). Consider refactoring.

    def refine_by_decoder(features,
                          end_points,
                          crop_size=None,
                          decoder_output_stride=None,
                          decoder_use_separable_conv=False,

Severity: Minor
Found in research/deeplab/model.py - About 5 hrs to fix

Function multi_scale_logits has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.

    def multi_scale_logits(images,
                           model_options,
                           image_pyramid,
                           weight_decay=0.0001,
                           is_training=False,

Severity: Minor
Found in research/deeplab/model.py - About 4 hrs to fix

Function refine_by_decoder has 15 arguments (exceeds 4 allowed). Consider refactoring.

    def refine_by_decoder(features,

Severity: Major
Found in research/deeplab/model.py - About 1 hr to fix
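A common remedy for long parameter lists like this is to fold related arguments into a single options object, much as model.py already does elsewhere with its model_options argument. A minimal sketch with illustrative (not actual) field names and a placeholder body:

```python
# Hypothetical sketch of the usual fix for long parameter lists:
# bundle related arguments into one immutable options object so call
# sites stay readable and new settings don't grow the signature.
# Field names are illustrative, not the real refine_by_decoder params.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass(frozen=True)
class DecoderOptions:
    crop_size: Optional[Sequence[int]] = None
    decoder_output_stride: Optional[int] = None
    use_separable_conv: bool = False
    weight_decay: float = 0.0001
    is_training: bool = False


def refine_by_decoder(features, end_points, options=DecoderOptions()):
    # Placeholder body; the real decoder logic would read
    # options.use_separable_conv etc. instead of threading fifteen
    # positional arguments through every call.
    return features
```

Call sites then name only the settings they change, e.g. `DecoderOptions(decoder_output_stride=4, is_training=True)`, and the rest keep documented defaults.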

Function predict_labels_multi_scale has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.

    def predict_labels_multi_scale(images,
                                   model_options,
                                   eval_scales=(1.0,),
                                   add_flipped_images=False):
      """Predicts segmentation labels.

Severity: Minor
Found in research/deeplab/model.py - About 1 hr to fix

Function get_branch_logits has 8 arguments (exceeds 4 allowed). Consider refactoring.

    def get_branch_logits(features,

Severity: Major
Found in research/deeplab/model.py - About 1 hr to fix

Function multi_scale_logits has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def multi_scale_logits(images,

Severity: Major
Found in research/deeplab/model.py - About 50 mins to fix

Function _get_logits has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def _get_logits(images,

Severity: Major
Found in research/deeplab/model.py - About 50 mins to fix

Function extract_features has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def extract_features(images,

Severity: Major
Found in research/deeplab/model.py - About 50 mins to fix

Avoid deeply nested control flow statements.

    if model_options.aspp_with_concat_projection:
      concat_logits = slim.conv2d(
          concat_logits, depth, 1, scope=CONCAT_PROJECTION_SCOPE)
      concat_logits = slim.dropout(
          concat_logits,

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Function _decoder_with_sum_merge has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def _decoder_with_sum_merge(decoder_features_list,

Severity: Minor
Found in research/deeplab/model.py - About 45 mins to fix

Avoid deeply nested control flow statements.

    if model_options.atrous_rates:
      # Employ 3x3 convolutions with different atrous rates.
      for i, rate in enumerate(model_options.atrous_rates, 1):
        scope = ASPP_SCOPE + str(i)
        if model_options.aspp_with_separable_conv:

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Avoid deeply nested control flow statements.

    if (model_options.add_image_level_feature and
        model_options.aspp_with_squeeze_and_excitation):
      concat_logits *= image_feature

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Avoid deeply nested control flow statements.

    if model_options.add_image_level_feature:
      if model_options.crop_size is not None:
        image_pooling_crop_size = model_options.image_pooling_crop_size
        # If image_pooling_crop_size is not specified, use crop_size.
        if image_pooling_crop_size is None:

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Avoid deeply nested control flow statements.

    if decoder_stage:
      scope_suffix = '_{}'.format(decoder_stage)
    for i, name in enumerate(feature_list):

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Avoid deeply nested control flow statements.

    for i, name in enumerate(feature_list):
      decoder_features_list = [decoder_features]
      # MobileNet and NAS variants use different naming convention.
      if ('mobilenet' in model_variant or
          model_variant.startswith('mnas') or

Severity: Major
Found in research/deeplab/model.py - About 45 mins to fix

Function _decoder_with_concat_merge has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def _decoder_with_concat_merge(decoder_features_list,

Severity: Minor
Found in research/deeplab/model.py - About 35 mins to fix

Function get_branch_logits has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.

    def get_branch_logits(features,
                          num_classes,
                          atrous_rates=None,
                          aspp_with_batch_norm=False,
                          kernel_size=1,

Severity: Minor
Found in research/deeplab/model.py - About 35 mins to fix

Function _get_logits has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.

    def _get_logits(images,
                    model_options,
                    weight_decay=0.0001,
                    reuse=None,
                    is_training=False,

Severity: Minor
Found in research/deeplab/model.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.

    with slim.arg_scope(
        [slim.conv2d, slim.separable_conv2d],
        weights_regularizer=slim.l2_regularizer(weight_decay),

Severity: Major
Found in research/deeplab/model.py and 1 other location - About 1 hr to fix
research/deeplab/model.py on lines 693..695

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge, leaving bugs as the two similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 39.

Code Climate sets useful threshold defaults for the languages it supports, but you may want to adjust them to your project guidelines. The threshold is the minimum mass a code block must have to be analyzed for duplication: the lower the threshold, the more fine-grained the comparison. If the engine reports duplication too easily, try raising the threshold; if you suspect it misses duplication, try lowering it. The best setting tends to differ from language to language. See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Similar blocks of code found in 2 locations. Consider refactoring.

    with slim.arg_scope(
        [slim.conv2d, slim.separable_conv2d],
        weights_regularizer=slim.l2_regularizer(weight_decay),

Severity: Major
Found in research/deeplab/model.py and 1 other location - About 1 hr to fix
research/deeplab/model.py on lines 441..443
This issue has a mass of 39.

Similar blocks of code found in 2 locations. Consider refactoring.

    crop_width = (
        model_options.crop_size[1]
        if model_options.crop_size else tf.shape(images)[2])

Severity: Minor
Found in research/deeplab/model.py and 1 other location - About 35 mins to fix
research/deeplab/model.py on lines 262..264
This issue has a mass of 33.

Similar blocks of code found in 2 locations. Consider refactoring.

    crop_height = (
        model_options.crop_size[0]
        if model_options.crop_size else tf.shape(images)[1])

Severity: Minor
Found in research/deeplab/model.py and 1 other location - About 35 mins to fix
research/deeplab/model.py on lines 265..267
This issue has a mass of 33.
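These two near-identical crop_width/crop_height expressions are a textbook case for the DRY fix: extract one helper that both call sites share. A hypothetical, dependency-free sketch (plain sequences stand in for tf.shape tensors, and the helper name is invented):

```python
# Hypothetical refactoring of the duplicated crop_width / crop_height
# expressions: one helper picks the static crop size when it is set
# and falls back to the dynamic image shape otherwise.

def crop_dimension(crop_size, dynamic_shape, axis):
    # axis 0 -> height, axis 1 -> width (matching crop_size ordering).
    if crop_size:
        return crop_size[axis]
    # dynamic_shape is NHWC, so spatial dims start at index 1.
    return dynamic_shape[axis + 1]


crop_height = crop_dimension([512, 768], None, 0)        # -> 512
crop_width = crop_dimension(None, (1, 300, 400, 3), 1)   # -> 400
```

With a single authoritative helper, a future change (say, supporting a different tensor layout) lands in one place instead of two.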

Similar blocks of code found in 2 locations. Consider refactoring.

    if crop_size is None:
      crop_size = [tf.shape(images)[1], tf.shape(images)[2]]

Severity: Minor
Found in research/deeplab/model.py and 1 other location - About 30 mins to fix
research/efficient-hrl/agent.py on lines 137..137
This issue has a mass of 32.
