DeepRegNet/DeepReg

Showing 89 of 113 total issues

File layer.py has 520 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""This module defines custom layers."""
import itertools
from typing import List, Tuple, Union

import numpy as np
Severity: Major
Found in deepreg/model/layer.py - About 1 day to fix

File interface.py has 513 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Interface between the data loaders and file loaders.
"""

from abc import ABC
Severity: Major
Found in deepreg/dataset/loader/interface.py - About 1 day to fix

File u_net.py has 508 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# coding=utf-8

from typing import List, Optional, Tuple, Union

import tensorflow as tf
Severity: Major
Found in deepreg/model/backbone/u_net.py - About 1 day to fix

File network.py has 476 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import os
from abc import abstractmethod
from copy import deepcopy
from typing import Dict, Optional, Tuple


Severity: Minor
Found in deepreg/model/network.py - About 7 hrs to fix

File preprocess.py has 398 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Module containing data augmentation techniques.
  - 3D Affine/DDF Transforms for moving and fixed images.
"""


Severity: Minor
Found in deepreg/dataset/preprocess.py - About 5 hrs to fix

File layer_util.py has 361 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Module containing utilities for layer inputs
"""
import itertools
from typing import List, Tuple, Union
Severity: Minor
Found in deepreg/model/layer_util.py - About 4 hrs to fix

Function get_intra_sample_indices has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

    def get_intra_sample_indices(self) -> list:
        """
        Calculate the sample indices for intra-group sampling
        The index to identify a sample is (group1, image1, group2, image2), means
        - image1 of group1 is moving image
Severity: Minor
Found in deepreg/dataset/loader/grouped_loader.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

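As an illustration of these rules (a generic sketch, not code from DeepReg), the nested version below pays for each break in the linear flow and again for nesting, while the flat rewrite collapses the same logic into a single expression:

def count_positives_nested(groups):
    """Nested loops plus a nested branch: each deeper level adds to the score (roughly +1, +2, +3)."""
    count = 0
    for group in groups:
        for value in group:
            if value > 0:
                count += 1
    return count


def count_positives_flat(groups):
    """Same result as one flat generator expression, keeping the flow linear."""
    return sum(value > 0 for group in groups for value in group)
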
File predict.py has 326 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# coding=utf-8

"""
Module to perform predictions on data using
command line interface.
Severity: Minor
Found in deepreg/predict.py - About 3 hrs to fix

Function predict_on_dataset has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.
Open

def predict_on_dataset(
    dataset: tf.data.Dataset,
    fixed_grid_ref: tf.Tensor,
    model: tf.keras.Model,
    save_dir: str,
Severity: Minor
Found in deepreg/predict.py - About 3 hrs to fix

File vis.py has 320 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Module to generate visualisations of data
at command line interface.
Requires ffmpeg writer to write gif files
"""
Severity: Minor
Found in deepreg/vis.py - About 3 hrs to fix

File label.py has 305 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""Provide different loss or metrics classes for labels."""

import tensorflow as tf

from deepreg.constant import EPS
Severity: Minor
Found in deepreg/loss/label.py - About 3 hrs to fix

Function sample_index_generator has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def sample_index_generator(self):
        """
        Yield (moving_index, fixed_index, image_indices) sequentially, where

          - moving_index = (group1, image1)
Severity: Minor
Found in deepreg/dataset/loader/grouped_loader.py - About 2 hrs to fix

File registry.py has 256 lines of code (exceeds 250 allowed). Consider refactoring.
Open

from copy import deepcopy
from typing import Any, Callable, Dict, Optional

BACKBONE_CLASS = "backbone_class"
LOSS_CLASS = "loss_class"
Severity: Minor
Found in deepreg/registry.py - About 2 hrs to fix

Function __init__ has 17 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Major
Found in deepreg/model/backbone/u_net.py - About 2 hrs to fix

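One common way to cut down such a long argument list (a generic sketch, not DeepReg's actual API; the class and field names below are hypothetical) is to group related hyper-parameters into a configuration object and pass that single object instead:

from dataclasses import dataclass
from typing import Tuple


@dataclass
class BackboneConfig:
    """Hypothetical container bundling hyper-parameters that would otherwise be separate arguments."""

    image_size: Tuple[int, int, int]
    num_channel_initial: int = 16
    depth: int = 4
    pooling: bool = True
    concat_skip: bool = False


class Backbone:
    def __init__(self, config: BackboneConfig):
        # a single named, typed argument replaces a long positional list
        self.config = config
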
File image.py has 254 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""Provide different loss or metrics classes for images."""
import tensorflow as tf

from deepreg.constant import EPS
from deepreg.loss.kernel import gaussian_kernel1d_size as gaussian_kernel1d
Severity: Minor
Found in deepreg/loss/image.py - About 2 hrs to fix

Function gif_tile_slices has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

def gif_tile_slices(img_paths, save_path=None, size=(2, 2), fname=None, interval=50):
    """
    Creates tiled gif over slices of multiple images.

    :param img_paths: list or comma separated string of image paths
Severity: Minor
Found in deepreg/vis.py - About 1 hr to fix

Function move_test_cases_into_correct_path has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

    def move_test_cases_into_correct_path(test_cases, path_to_train, path_to_test):
        folder_names = os.listdir(path_to_train)
        os.chdir(path_to_train)
        for case in test_cases:
            for folder in folder_names:
Severity: Minor
Found in demos/unpaired_ct_lung/demo_data.py - About 1 hr to fix

Function move_test_cases_into_correct_path has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

    def move_test_cases_into_correct_path(test_cases, path_to_train, path_to_test):
        folder_names = os.listdir(path_to_train)
        os.chdir(path_to_train)
        for case in test_cases:
            for folder in folder_names:
Severity: Minor
Found in demos/paired_ct_lung/demo_data.py - About 1 hr to fix

Function build_layers has 13 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def build_layers(
Severity: Major
Found in deepreg/model/backbone/u_net.py - About 1 hr to fix

Function get_inter_sample_indices has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def get_inter_sample_indices(self) -> list:
        """
        Calculate the sample indices for inter-group sampling
        The index to identify a sample is (group1, image1, group2, image2), means


Severity: Minor
Found in deepreg/dataset/loader/grouped_loader.py - About 1 hr to fix
