sylvchev/mdla

Showing 119 of 119 total issues

Function benchmarking_plot has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def benchmarking_plot(figname, pst, plot_sep, minibatchRange, mprocessRange):
    _ = plt.figure(figsize=(15, 10))
    bar_width = 0.35
    _ = plt.bar(
        np.array([0]),
Severity: Minor
Found in examples/example_benchmark_performance.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity measures how difficult a unit of code is to understand intuitively. Unlike Cyclomatic Complexity, which estimates how difficult your code will be to test, Cognitive Complexity indicates how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
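
As a hypothetical illustration (not code from this repository), the rules above mean that nested flow-breaking structures drive the score up, while language shorthand that expresses the same logic in one statement does not:

```python
# Two equivalent functions; the second has lower cognitive complexity
# because it avoids nested flow breaks.

def count_positive_nested(rows):
    # Loop + nested loop + nested if: each deeper flow break adds more
    # to the score (+1, +2, +3 under the nesting rule).
    total = 0
    for row in rows:
        for x in row:
            if x > 0:
                total += 1
    return total

def count_positive_flat(rows):
    # A generator expression is shorthand the language provides for
    # collapsing those statements, so it adds no complexity.
    return sum(1 for row in rows for x in row if x > 0)
```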

Function fit has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def fit(self, X, y=None):
        """Fit the model from data in X.

        Parameters
        ----------
Severity: Minor
Found in mdla/mdla.py - About 45 mins to fix

Function plot_reconstruction_samples has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def plot_reconstruction_samples(X, r, code, kernels, n, figname):
Severity: Minor
Found in examples/plot_bci_dict.py - About 45 mins to fix
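
One common way to shrink a long argument list is to group related parameters into a small configuration object. A hypothetical sketch (the `PlotOptions` class and the stub body are invented for illustration, not taken from `plot_bci_dict.py`):

```python
from dataclasses import dataclass

@dataclass
class PlotOptions:
    n: int          # number of reconstructions to draw
    figname: str    # output figure file name

def plot_reconstruction_samples(X, r, code, kernels, opts):
    # Stub body; the point is that grouping the presentation options
    # brings the signature from 6 arguments down to 5.
    return f"plot {opts.n} reconstructions of {len(X)} samples to {opts.figname}"
```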

Avoid deeply nested control flow statements.
Open

                    if decimation:
                        fs = decimate(fs, int(dfactor), axis=0, zero_phase=True)

                # Event Type
                trial_begin = 768
Severity: Major
Found in experiments/experiment_bci_competition.py - About 45 mins to fix
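
A standard fix for deep nesting is to invert the conditions into guard clauses so the main path stays at one level. A hypothetical sketch with invented names, not the experiment code itself:

```python
def process_trials_nested(trials):
    # Deeply nested: each trial check adds another indentation level.
    results = []
    for t in trials:
        if t is not None:
            if t.get("valid"):
                results.append(t["value"] * 2)
    return results

def process_trials_flat(trials):
    # Equivalent logic; the guard clause skips bad trials early and
    # keeps the useful work at a single nesting level.
    results = []
    for t in trials:
        if t is None or not t.get("valid"):
            continue
        results.append(t["value"] * 2)
    return results
```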

Function _compute_gradient has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def _compute_gradient(
Severity: Minor
Found in mdla/mdla.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                    for e in range(s.shape[1]):
                        ns[:, e] = filtfilt(bn, an, s[:, e])
                    # Apply a bandpass filter
                    fs = zeros_like(s)
Severity: Major
Found in experiments/experiment_bci_competition.py - About 45 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

plot_univariate(
    array(learned_dict2.objective_error),
    array(learned_dict2.detect_rate),
    array(learned_dict2.wasserstein),
Severity: Minor
Found in examples/example_univariate.py and 1 other location - About 40 mins to fix
examples/example_multivariate.py on lines 195..198

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to replicate further and to diverge, leaving bugs as the two similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 34.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too readily, try raising the threshold; if you suspect it is not catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
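
The two flagged call sites pass the same three metric arrays in the same order to `plot_univariate` and `plot_multivariate`. One way to give that knowledge a single home is a small forwarding helper; a hypothetical sketch, where the helper name `plot_learning_curves` and the dict-shaped `learned` argument are invented for illustration:

```python
def plot_learning_curves(plot_fn, learned, n_iter, figname):
    # plot_fn stands in for plot_univariate or plot_multivariate;
    # `learned` stands in for the fitted dictionary holding the metrics.
    return plot_fn(
        learned["objective_error"],
        learned["detect_rate"],
        learned["wasserstein"],
        n_iter,
        figname,
    )
```

With this helper, both example scripts would call `plot_learning_curves` and the argument order is maintained in exactly one place.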

Similar blocks of code found in 2 locations. Consider refactoring.
Open

plot_multivariate(
    array(learned_dict2.objective_error),
    array(learned_dict2.detect_rate),
    array(learned_dict2.wasserstein),
Severity: Minor
Found in examples/example_multivariate.py and 1 other location - About 40 mins to fix
examples/example_univariate.py on lines 203..206

Function plot_boxes has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def plot_boxes(fig, data, color="blue", n_iter=100, label=""):
Severity: Minor
Found in experiments/experiment_multivariate_recovering.py - About 35 mins to fix

Function multivariate_sparse_encode has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def multivariate_sparse_encode(
Severity: Minor
Found in mdla/mdla.py - About 35 mins to fix

Function plot_multivariate has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def plot_multivariate(objective_error, detection_rate, wasserstein, n_iter, figname):
Severity: Minor
Found in examples/example_multivariate.py - About 35 mins to fix

Function array3d has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def array3d(X, dtype=None, order=None, copy=False, force_all_finite=True):
Severity: Minor
Found in mdla/mdla.py - About 35 mins to fix

Function _set_mdla_params has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def _set_mdla_params(
Severity: Minor
Found in mdla/mdla.py - About 35 mins to fix

Function plot_univariate has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def plot_univariate(objective_error, detect_rate, wasserstein, n_iter, figname):
Severity: Minor
Found in examples/example_univariate.py - About 35 mins to fix

Function plot_atom_usage has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def plot_atom_usage(X, kernels, n_nonzero_coefs, n_jobs, figname):
Severity: Minor
Found in examples/plot_bci_dict.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Minor
Found in mdla/mdla.py - About 35 mins to fix

Function benchmarking_plot has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def benchmarking_plot(figname, pst, plot_sep, minibatchRange, mprocessRange):
Severity: Minor
Found in examples/example_benchmark_performance.py - About 35 mins to fix

Function _generate_testbed has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def _generate_testbed(
    kernel_init_len,
    n_nonzero_coefs,
    n_kernels,
    n_samples=10,
Severity: Minor
Found in examples/example_benchmark_performance.py - About 35 mins to fix

Similar blocks of code found in 7 locations. Consider refactoring.
Open

        _ = methaus.plot(
            arange(1, n_iter + 1),
            medianhfs,
Severity: Major
Found in experiments/experiment_dictionary_recovering.py and 6 other locations - About 35 mins to fix
experiments/experiment_dictionary_recovering.py on lines 74..76
experiments/experiment_dictionary_recovering.py on lines 88..90
experiments/experiment_dictionary_recovering.py on lines 109..110
experiments/experiment_dictionary_recovering.py on lines 141..142
experiments/experiment_dictionary_recovering.py on lines 151..153
experiments/experiment_dictionary_recovering.py on lines 174..176

Similar blocks of code found in 7 locations. Consider refactoring.
Open

        _ = methaus.plot(
            arange(1, n_iter + 1), medianhc, linewidth=1, label=r"$1-d_H^c$", color="cyan"
Severity: Major
Found in experiments/experiment_dictionary_recovering.py and 6 other locations - About 35 mins to fix
experiments/experiment_dictionary_recovering.py on lines 74..76
experiments/experiment_dictionary_recovering.py on lines 88..90
experiments/experiment_dictionary_recovering.py on lines 119..121
experiments/experiment_dictionary_recovering.py on lines 141..142
experiments/experiment_dictionary_recovering.py on lines 151..153
experiments/experiment_dictionary_recovering.py on lines 174..176
