tensorflow/models
research/cognitive_planning/envs/active_vision_dataset_env.py

Summary

Maintainability: F (estimated 1 wk of remediation effort)
Test Coverage: not available

File active_vision_dataset_env.py has 912 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2018 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 2 days to fix

    Function __init__ has a Cognitive Complexity of 37 (exceeds 5 allowed). Consider refactoring.
    Open

      def __init__(
          self,
          episode_length,
          modality_types,
          confidence_threshold,
    Severity: Minor
    Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 5 hrs to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

    Further reading
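The rules above can be made concrete with a small, hypothetical illustration (not taken from the analyzed file): nesting flow-breaking structures inside one another raises the score quickly, while the same behaviour written with guard clauses stays nearly linear.

```python
# Hypothetical example of the cognitive-complexity rules above.

def nested_lookup(data, world, image_id):
    # Nested version: each `if`/`for` inside another flow-breaking
    # structure earns an extra increment, so the score climbs fast.
    result = None
    if data is not None:
        if world in data:
            for key, value in data[world].items():
                if key == image_id:
                    result = value
    return result


def flat_lookup(data, world, image_id):
    # Flattened version: early returns keep the flow linear, so the
    # same behaviour scores much lower.
    if data is None or world not in data:
        return None
    return data[world].get(image_id)
```

Both functions compute the same thing; only the second one reads in a straight line.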

    Function _update_graph has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
    Open

      def _update_graph(self):
        """Creates the graph for each environment and updates the _cur_graph."""
        if self._cur_world not in self._graph_cache:
          graph = nx.DiGraph()
          id_to_index = {}
    Severity: Minor
    Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 3 hrs to fix

    ActiveVisionDatasetEnv has 30 functions (exceeds 20 allowed). Consider refactoring.
    Open

    class ActiveVisionDatasetEnv(task_env.TaskEnv):
      """Simulates the environment from ActiveVisionDataset."""
      cached_data = None
    
      def __init__(
    Severity: Minor
    Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 3 hrs to fix

      Function _reset_env has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
      Open

        def _reset_env(
            self,
            new_world=None,
            new_goal=None,
            new_image_id=None,
      Severity: Minor
      Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 2 hrs to fix

      Function __init__ has 60 lines of code (exceeds 25 allowed). Consider refactoring.
      Open

        def __init__(
            self,
            episode_length,
            modality_types,
            confidence_threshold,
      Severity: Major
      Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 2 hrs to fix

        Function __init__ has 19 arguments (exceeds 4 allowed). Consider refactoring.
        Open

          def __init__(
        Severity: Major
        Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 2 hrs to fix
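A standard fix for a 19-argument constructor is to bundle related parameters into a config object and pass that instead. The sketch below uses the field names visible in the excerpt; the `AVDEnvConfig` name, the types, and the default value are assumptions, not the env's actual signature.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

# Hypothetical config-object refactoring for the 19-argument __init__.

@dataclass
class AVDEnvConfig:
    episode_length: int
    modality_types: Sequence[str]
    confidence_threshold: float
    output_size: int = 64  # assumed default, not from the source


class ActiveVisionDatasetEnvSketch:
    def __init__(self, config: AVDEnvConfig):
        # One argument instead of 19; grouped settings travel together
        # and gain defaults, types, and a single point of validation.
        self._episode_length = config.episode_length
        self._modality_types = list(config.modality_types)
        self._confidence_threshold = config.confidence_threshold
```

Callers then construct the config once and can share or serialize it, which also shortens test setup.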

          Function _compute_goal_indexes has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
          Open

            def _compute_goal_indexes(self):
              """Computes the goal indexes for the environment.
          
              Returns:
                The indexes of the goals that are closest to target categories. A vertex
          Severity: Minor
          Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 1 hr to fix

          Function observation has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
          Open

            def observation(self, view_pose):
              """Returns the observation at the given the vertex.
          
              Args:
                view_pose: pose of the view of interest.
          Severity: Minor
          Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 1 hr to fix

          Function _step_no_reward has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
          Open

            def _step_no_reward(self, action):
              """Performs a step in the environment with given action.
          
              Args:
                action: Action that is used to step in the environment. Action can be
          Severity: Minor
          Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 1 hr to fix

          Function _largest_detection_for_image has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
          Open

            def _largest_detection_for_image(self, image_id, detections_dict):
              """Assigns area of the largest box for the view with given image id.
          
              Args:
                image_id: Image id of the view.
          Severity: Minor
          Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 1 hr to fix

          Avoid deeply nested control flow statements.
          Open

                      for image_id in data[world]:
                        self._eval_init_points.append((world, image_id[0], goal))
                  logging.info('loaded %d eval init points', len(self._eval_init_points))
          Severity: Major
          Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 45 mins to fix
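One way to reduce the nesting flagged here is to iterate over combined pairs with `itertools.product`, which removes one indentation level. This is a hypothetical sketch of that pattern built around the access shapes visible in the excerpt (`data[world]` yielding tuples whose first element is an image id), not the file's actual surrounding code.

```python
import itertools

# Hypothetical flattening of the nested loops in the excerpt:
# (world, goal) pairs are produced at a single indentation level,
# leaving only one inner loop.

def collect_eval_init_points(data, goals):
    points = []
    for world, goal in itertools.product(data, goals):
        for image_id in data[world]:
            points.append((world, image_id[0], goal))
    return points
```

Extracting the body into a named function like this also gives the "load eval init points" step a testable seam.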

            Function read_cached_data has 5 arguments (exceeds 4 allowed). Consider refactoring.
            Open

            def read_cached_data(should_load_images, dataset_root, segmentation_file_name,
            Severity: Minor
            Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 35 mins to fix

              Function generate_detection_image has 5 arguments (exceeds 4 allowed). Consider refactoring.
              Open

              def generate_detection_image(detections,
              Severity: Minor
              Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 35 mins to fix

                Function read_cached_data has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                def read_cached_data(should_load_images, dataset_root, segmentation_file_name,
                                     targets_file_name, output_size):
                  """Reads all the necessary cached data.
                
                  Args:
                Severity: Minor
                Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 35 mins to fix

                Function random_step_sequence has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                Open

                  def random_step_sequence(self, min_len=None, max_len=None):
                    """Generates random step sequence that takes agent to the goal.
                
                    Args:
                      min_len: integer, minimum length of a step sequence. Not yet implemented.
                Severity: Minor
                Found in research/cognitive_planning/envs/active_vision_dataset_env.py - About 25 mins to fix

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                    if task_env.ModalityTypes.IMAGE in self._modality_types:
                      obs_shapes[task_env.ModalityTypes.IMAGE] = gym.spaces.Box(
                          low=0, high=255, shape=(self._output_size, self._output_size, 3))
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 474..476

                Duplicated Code

                Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                Tuning

                This issue has a mass of 55.

                We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

                Refactorings

                Further Reading
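For the pair of blocks flagged here, the two copies differ only in the number of channels (3 for RGB images, 1 for segmentation masks). A helper parameterized by channel count gives that knowledge a single home. The sketch below mirrors the `obs_shapes` construction without depending on gym; `image_box_shape` is a hypothetical stand-in for the repeated `gym.spaces.Box(low=0, high=255, shape=...)` call.

```python
# Hypothetical de-duplication of the two observation-space blocks.

def image_box_shape(output_size, channels):
    # Single authoritative definition of an image-like observation shape.
    return (output_size, output_size, channels)


def build_obs_shapes(modality_channels, output_size):
    # modality_channels maps each modality to its channel count,
    # e.g. {'image': 3, 'segmentation': 1}.
    return {modality: image_box_shape(output_size, channels)
            for modality, channels in modality_channels.items()}
```

In the real env this would wrap the `gym.spaces.Box` construction, so a change to the pixel range or dtype happens in exactly one place.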

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                    if task_env.ModalityTypes.SEMANTIC_SEGMENTATION in self._modality_types:
                      obs_shapes[task_env.ModalityTypes.SEMANTIC_SEGMENTATION] = gym.spaces.Box(
                          low=0, high=255, shape=(self._output_size, self._output_size, 1))
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 488..490

Duplicated Code

This issue has a mass of 55.

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                    if task_env.ModalityTypes.DEPTH in self._modality_types:
                      output[task_env.ModalityTypes.DEPTH] = self._depth_images[
                          self._cur_world][image_id]
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 622..625

Duplicated Code

This issue has a mass of 47.

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                    if task_env.ModalityTypes.SEMANTIC_SEGMENTATION in self._modality_types:
                      output[task_env.ModalityTypes.
                             SEMANTIC_SEGMENTATION] = self._semantic_segmentations[
                                 self._cur_world][image_id]
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 643..645

Duplicated Code

This issue has a mass of 47.

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                  z = [xyz[0][i][2][0] for i in range(n)]
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 211..211

Duplicated Code

This issue has a mass of 41.

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                  x = [xyz[0][i][0][0] for i in range(n)]
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 212..212
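The two flagged comprehensions read the x (index 0) and z (index 2) coordinates out of the same nested structure. A single helper with an axis parameter replaces both copies; the nesting below matches the `xyz[0][i][axis][0]` access pattern in the excerpt, and the helper name is hypothetical.

```python
# Hypothetical de-duplication of the x/z coordinate comprehensions.

def extract_axis(xyz, n, axis):
    # axis 0 -> x, axis 2 -> z, following the excerpt's indexing.
    return [xyz[0][i][axis][0] for i in range(n)]

# Usage in place of the duplicated lines:
#   x = extract_axis(xyz, n, 0)
#   z = extract_axis(xyz, n, 2)
```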

Duplicated Code

This issue has a mass of 41.

                Similar blocks of code found in 2 locations. Consider refactoring.
                Open

                SUPPORTED_MODALITIES = [
                    task_env.ModalityTypes.SEMANTIC_SEGMENTATION,
                    task_env.ModalityTypes.DEPTH,
                    task_env.ModalityTypes.OBJECT_DETECTION,
                    task_env.ModalityTypes.IMAGE,
                Severity: Minor
                Found in research/cognitive_planning/envs/active_vision_dataset_env.py and 1 other location - About 50 mins to fix
                research/object_detection/utils/object_detection_evaluation.py on lines 776..783

Duplicated Code

This issue has a mass of 36.

                Identical blocks of code found in 2 locations. Consider refactoring.
                Open

                    self._prev_action = np.zeros((len(self._actions) + 1,), dtype=np.float32)
                Severity: Minor
                Found in research/cognitive_planning/envs/active_vision_dataset_env.py and 1 other location - About 35 mins to fix
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 733..733
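The identical `self._prev_action = np.zeros((len(self._actions) + 1,), dtype=np.float32)` line appears in two methods. Centralizing it keeps the "+ 1 extra slot" convention in a single place. The sketch below uses a plain list so it carries no numpy dependency; the helper name is hypothetical, and in the real env it would return the `np.zeros` array instead.

```python
# Hypothetical single home for the duplicated previous-action reset.

def make_prev_action(num_actions):
    # One entry per action plus one trailing slot, all zeroed.
    return [0.0] * (num_actions + 1)
```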

Duplicated Code

This issue has a mass of 33.

                Identical blocks of code found in 2 locations. Consider refactoring.
                Open

                    self._prev_action = np.zeros((len(self._actions) + 1,), dtype=np.float32)
                Severity: Minor
                Found in research/cognitive_planning/envs/active_vision_dataset_env.py and 1 other location - About 35 mins to fix
                research/cognitive_planning/envs/active_vision_dataset_env.py on lines 850..850

Duplicated Code

This issue has a mass of 33.
