neurodata/mgcpy

Showing 63 of 192 total issues

Function plot_all_curves has a Cognitive Complexity of 56 (exceeds 5 allowed). Consider refactoring.
Open

def plot_all_curves(which_type):
    simulation_names = ['linear', 'exponential', 'cubic', 'joint_normal', 'step', 'quadratic', 'w_shape', 'spiral',
                        'bernoulli', 'log', 'fourth_root', 'sine_4pi',
                        'sine_16pi', 'square', 'two_parabolas', 'circle', 'ellipse', 'diamond', 'multi_noise',
                        'multi_indept']
Severity: Minor
Found in demos/figure_2_power_curve.py - About 1 day to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
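
To make the last two rules concrete, here is a small sketch (not code from mgcpy) of the same filtering logic written twice: the nested version pays for every break in the linear flow and again for the nesting, while the guard-clause version keeps the flow linear and uses a comprehension, which the first rule does not penalise.

def collect_positive_scores_nested(records):
    # Each `for`/`if` is a break in the linear flow, and each one is nested
    # inside the previous, so the cognitive complexity climbs quickly.
    scores = []
    for record in records:
        if record is not None:
            if record.get("valid"):
                for value in record["values"]:
                    if value > 0:
                        scores.append(value)
    return scores


def collect_positive_scores_flat(records):
    # Guard clause plus a generator expression: same behaviour, but the flow
    # stays linear and the shorthand is not counted as extra complexity.
    scores = []
    for record in records:
        if record is None or not record.get("valid"):
            continue
        scores.extend(value for value in record["values"] if value > 0)
    return scores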

Function plot_diff_curves has a Cognitive Complexity of 52 (exceeds 5 allowed). Consider refactoring.
Open

def plot_diff_curves(which_type):
    simulation_names = ['linear', 'exponential', 'cubic', 'joint_normal', 'step', 'quadratic', 'w_shape', 'spiral',
                        'bernoulli', 'log', 'fourth_root', 'sine_4pi',
                        'sine_16pi', 'square', 'two_parabolas', 'circle', 'ellipse', 'diamond', 'multi_noise',
                        'multi_indept']
Severity: Minor
Found in demos/figure_2_power_curve.py - About 1 day to fix

File simulations.py has 446 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import numpy as np


def gen_coeffs(num_dim):
    """
Severity: Minor
Found in mgcpy/benchmarks/simulations.py - About 6 hrs to fix

File figure_2_power_curve.py has 324 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# %% Change working directory from the workspace root to the ipynb file location. Turn this addition off with the DataScience.changeDirOnImportExport setting
# ms-python.python added
from mgcpy.independence_tests.mdmr import MDMR
from mgcpy.independence_tests.kendall_spearman import KendallSpearman
from mgcpy.independence_tests.hhg import HHG
Severity: Minor
Found in demos/figure_2_power_curve.py - About 3 hrs to fix

Function test_simulations has 70 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def test_simulations():
    num_samps = 1000
    num_dim1 = 1
    num_dim2 = 300
    independent = True
Severity: Major
Found in mgcpy/benchmarks/unit_tests/simulations_test.py - About 2 hrs to fix
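
One route to a shorter test is pytest parametrization, so each simulation becomes its own test case. The sketch below assumes the simulations return an (x, y) pair and accept the num_samp/num_dim/indep keywords shown for cub_sim later in this report; treat it as an illustration rather than a drop-in replacement for test_simulations.

import pytest

from mgcpy.benchmarks.simulations import cub_sim

# One entry per simulation under test; only cub_sim is listed here because its
# signature appears elsewhere in this report. Extend with the other simulations.
CASES = [
    pytest.param(cub_sim, {"num_samp": 100, "num_dim": 1}, id="cubic-1d"),
    pytest.param(cub_sim, {"num_samp": 100, "num_dim": 3, "indep": True}, id="cubic-3d-indep"),
]


@pytest.mark.parametrize("simulation, kwargs", CASES)
def test_simulation_returns_one_row_per_sample(simulation, kwargs):
    x, y = simulation(**kwargs)  # assumed (x, y) return, as used in the benchmarks
    assert x.shape[0] == kwargs["num_samp"]
    assert y.shape[0] == kwargs["num_samp"]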

Function plot_all_curves has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

def plot_all_curves(base_path):
    fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(28, 24), sharex=True, sharey=True)
    simulation_type = 0
    for i, row in enumerate(ax):
        for j, col in enumerate(row):

Function test_statistic has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def test_statistic(self, matrix_X, matrix_Y, is_fast=False, fast_dcorr_data={}):
        """
        Computes the distance correlation between two datasets.

        :param matrix_X: is interpreted as either:
Severity: Minor
Found in mgcpy/independence_tests/dcorr.py - About 1 hr to fix
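
For readers who want to see what is being computed here, below is a minimal NumPy sketch of the classical (biased) distance correlation. It covers only the core formula; mgcpy's DCorr additionally handles the unbiased and fast variants referenced by is_fast and fast_dcorr_data, so this is not the library implementation.

import numpy as np


def biased_distance_correlation(x, y):
    """Classical (biased) sample distance correlation between two data matrices."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    # Pairwise Euclidean distance matrices.
    a = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    b = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()    # squared distance covariance
    dvarx2 = (A * A).mean()   # squared distance variance of X
    dvary2 = (B * B).mean()
    if dvarx2 == 0 or dvary2 == 0:
        return 0.0            # constant input: take the correlation to be 0
    return float(np.sqrt(dcov2 / np.sqrt(dvarx2 * dvary2)))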

Function test_statistic has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def test_statistic(self, matrix_X, matrix_Y, permutations=0, individual=0, disttype='cityblock'):
        """
        Computes MDMR Pseudo-F statistic between two datasets.

        - It first takes the distance matrix of Y (by )
Severity: Minor
Found in mgcpy/independence_tests/mdmr.py - About 1 hr to fix
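
The pseudo-F referred to here is the McArdle-Anderson distance-based regression statistic. The sketch below shows the usual construction for a single block of predictors; degrees-of-freedom conventions vary slightly across presentations, and mgcpy's MDMR adds permutation handling and per-predictor statistics, so this is an outline of the idea rather than the library code.

import numpy as np


def mdmr_pseudo_f(x, d_y):
    """Pseudo-F for regressing a distance matrix d_y (n x n) on predictors x (n x p)."""
    d_y = np.asarray(d_y, dtype=float)
    n = d_y.shape[0]
    x = np.asarray(x, dtype=float).reshape(n, -1)
    p = x.shape[1]
    # Gower-center the squared distance matrix: G = C @ (-0.5 * D**2) @ C.
    c = np.eye(n) - np.ones((n, n)) / n
    g = c @ (-0.5 * d_y ** 2) @ c
    # Hat matrix of the design (intercept column plus predictors).
    design = np.column_stack([np.ones(n), x])
    h = design @ np.linalg.pinv(design.T @ design) @ design.T
    # Explained and residual "distance variance", each scaled by its degrees of freedom.
    explained = np.trace(h @ g @ h) / p
    residual = np.trace((np.eye(n) - h) @ g @ (np.eye(n) - h)) / (n - p - 1)
    return float(explained / residual)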

Function test_dcorrx has 33 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def test_dcorrx():
    # test the special case when one of the dataset has zero variance
    X = np.array([1, 1, 1, 1])
    Y = np.array([1, 2, 3, 4])
    unbiased = DCorrX(which_test='unbiased', max_lag = 0)
Severity: Minor
Found in mgcpy/independence_tests/unit_tests/dcorrx/dcorrx_test.py - About 1 hr to fix

Function test_statistic has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def test_statistic(self, matrix_X=None, matrix_Y=None):
        """
        Computes the Pearson/RV/CCa correlation measure between two datasets.

        - Default computes linear correlation for RV
Severity: Minor
Found in mgcpy/independence_tests/rv_corr.py - About 1 hr to fix
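
As context for this method, the RV coefficient it computes in its default mode is a multivariate generalisation of the squared Pearson correlation. Below is a minimal sketch of just that case, leaving out the Pearson and CCA modes the docstring mentions; it is an illustration, not mgcpy's implementation.

import numpy as np


def rv_coefficient(x, y):
    """RV coefficient between two data matrices (reduces to r^2 for 1-D inputs)."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y, dtype=float).reshape(len(y), -1)
    # Column-center both matrices.
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    s_xy = x.T @ y
    s_xx = x.T @ x
    s_yy = y.T @ y
    # tr(S_xy S_yx) / sqrt(tr(S_xx^2) * tr(S_yy^2))
    return float(np.trace(s_xy @ s_xy.T) / np.sqrt(np.trace(s_xx @ s_xx) * np.trace(s_yy @ s_yy)))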

Function power has 9 arguments (exceeds 4 allowed). Consider refactoring.
Open

def power(independence_test, sample_generator, num_samples=100, num_dimensions=1, theta=0, noise=0.0, repeats=1000, alpha=.05, simulation_type=''):
Severity: Major
Found in mgcpy/benchmarks/power_two_sample.py - About 1 hr to fix
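
A standard way to get under the argument limit without changing behaviour is the introduce-parameter-object refactoring: bundle the settings that always travel together into one object. The dataclass below is illustrative and not part of mgcpy; its field names simply mirror the keyword arguments above.

from dataclasses import dataclass


@dataclass
class PowerSimulationSettings:
    # Mirrors the keyword arguments of power() above; not an mgcpy class.
    num_samples: int = 100
    num_dimensions: int = 1
    theta: float = 0
    noise: float = 0.0
    repeats: int = 1000
    alpha: float = 0.05
    simulation_type: str = ''


def power(independence_test, sample_generator, settings=None):
    settings = settings if settings is not None else PowerSimulationSettings()
    # ... existing body, reading settings.num_samples, settings.alpha, and so on.

A call then reads power(test, generator, PowerSimulationSettings(noise=0.5, repeats=500)) instead of threading nine separate arguments through every call site.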

Function test_statistic has 28 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def test_statistic(self, matrix_X, matrix_Y, permutations=0, individual=0, disttype='cityblock'):
        """
        Computes MDMR Pseudo-F statistic between two datasets.

        - It first takes the distance matrix of Y (by )
Severity: Minor
Found in mgcpy/independence_tests/mdmr.py - About 1 hr to fix

Function power has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def power(independence_test, sample_generator, num_samples=100, num_dimensions=1, noise=0.0, repeats=1000, alpha=.05, simulation_type=''):
    '''
    Estimate the power of an independence test given a simulator to sample from

    :param independence_test: an object whose class inherits from the ``Independence_Test`` abstract class
Severity: Minor
Found in mgcpy/benchmarks/power.py - About 1 hr to fix
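
For orientation, the power estimate this function produces is a Monte Carlo quantity. The sketch below shows one common formulation, comparing the statistic under the alternative with the null quantile obtained by permuting one sample; it illustrates the idea and is not mgcpy's exact loop.

import numpy as np


def estimate_power(test_statistic, sample_generator, repeats=1000, alpha=0.05, seed=0):
    """Monte Carlo power estimate for an independence statistic.

    test_statistic(x, y) -> float; sample_generator() -> (x, y) drawn under
    the alternative hypothesis. Both callables are supplied by the caller.
    """
    rng = np.random.default_rng(seed)
    alt_stats, null_stats = [], []
    for _ in range(repeats):
        x, y = sample_generator()
        alt_stats.append(test_statistic(x, y))
        # Shuffling y breaks the dependence, giving a draw from the null.
        null_stats.append(test_statistic(x, rng.permutation(y)))
    cutoff = np.quantile(null_stats, 1 - alpha)
    # Power = fraction of alternative draws whose statistic clears the null cutoff.
    return float(np.mean(np.asarray(alt_stats) > cutoff))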

Function test_statistic has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def test_statistic(self, matrix_X, matrix_Y):
        """
        Computes the HHG correlation measure between two datasets.

        :param matrix_X: a [n*p] data matrix, a matrix with n samples in p dimensions
Severity: Minor
Found in mgcpy/independence_tests/hhg.py - About 1 hr to fix

Function p_value has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

    def p_value(self, matrix_X, matrix_Y, replication_factor=1000):
        """
        Tests independence between two datasets using the independence test and permutation test.

        :param matrix_X: a ``[n*p]`` matrix, a matrix with n samples in ``p`` dimensions
Severity: Minor
Found in mgcpy/independence_tests/abstract_class.py - About 1 hr to fix
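
The permutation test named in this docstring has a compact generic form. The sketch below illustrates the idea with a free function where test_statistic is any caller-supplied callable returning a scalar; mgcpy's abstract class instead wires this to each subclass's test_statistic method, so this is not the library code.

import numpy as np


def permutation_p_value(test_statistic, matrix_X, matrix_Y, replication_factor=1000, seed=0):
    """Right-tailed permutation p-value for an independence statistic."""
    rng = np.random.default_rng(seed)
    observed = test_statistic(matrix_X, matrix_Y)
    exceed = 0
    for _ in range(replication_factor):
        permuted_Y = rng.permutation(matrix_Y)  # shuffle the rows of Y
        if test_statistic(matrix_X, permuted_Y) >= observed:
            exceed += 1
    # The +1 correction keeps the estimate away from exactly zero.
    return (exceed + 1) / (replication_factor + 1)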

Function test_dcorr_stat has 26 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def test_dcorr_stat():
    # test the special case when one of the dataset has zero variance
    X = np.array([1, 1, 1])[:, np.newaxis]
    Y = np.array([1, 2, 3])[:, np.newaxis]
    unbiased = DCorr(which_test='unbiased')
Severity: Minor
Found in mgcpy/independence_tests/unit_tests/dcorr/dcorr_test.py - About 1 hr to fix

Function power_given_data has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def power_given_data(independence_test, simulation_type, data_type='dimension', num_samples=100, num_dimensions=1, repeats=1000, alpha=.05, additional_params={}):
Severity: Major
Found in mgcpy/benchmarks/power.py - About 1 hr to fix

Function power has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def power(independence_test, sample_generator, num_samples=100, num_dimensions=1, noise=0.0, repeats=1000, alpha=.05, simulation_type=''):
Severity: Major
Found in mgcpy/benchmarks/power.py - About 1 hr to fix

Function power_given_data has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def power_given_data(base_path, independence_test, simulation_type, num_samples, repeats=1000, alpha=.05, additional_params={}, is_rf=False):
Severity: Major
Found in mgcpy/benchmarks/hypothesis_tests/two_sample_test/power.py - About 1 hr to fix

Function cub_sim has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def cub_sim(num_samp, num_dim, noise=15, indep=False, low=-1, high=1,
Severity: Major
Found in mgcpy/benchmarks/simulations.py - About 1 hr to fix