neuropsychology/NeuroKit.py


Showing 176 of 176 total issues

Similar blocks of code found in 2 locations. Consider refactoring.

        if sex == "m":
            if age <= 49:
                hrv_adjusted["meanNN_Adjusted"] = (hrv["meanNN"]-930)/133
                hrv_adjusted["sdNN_Adjusted"] = (hrv["sdNN"]-45.8)/18.8
                hrv_adjusted["RMSSD_Adjusted"] = (hrv["RMSSD"]-34.0)/18.3
Severity: Major
Found in neurokit/bio/bio_ecg.py and 1 other location - About 3 days to fix
neurokit/bio/bio_ecg.py on lines 757..773

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 369.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.


Similar blocks of code found in 2 locations. Consider refactoring.

        if sex == "f":
            if age <= 49:
                hrv_adjusted["meanNN_Adjusted"] = (hrv["meanNN"]-901)/117
                hrv_adjusted["sdNN_Adjusted"] = (hrv["sdNN"]-44.9)/19.2
                hrv_adjusted["RMSSD_Adjusted"] = (hrv["RMSSD"]-36.5)/20.1
Severity: Major
Found in neurokit/bio/bio_ecg.py and 1 other location - About 3 days to fix
neurokit/bio/bio_ecg.py on lines 740..756
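
The two excerpts above differ only in their normative constants, which makes them a textbook DRY target: move the (mean, SD) pairs into a lookup table and keep the z-scoring logic in one place. A minimal sketch, using only the values visible in the excerpts (the table and helper names are hypothetical; the age bands hidden by the truncation would be added the same way):

# Hypothetical refactoring: normative (mean, sd) pairs keyed by sex and
# age-band upper bound, so the adjustment arithmetic exists exactly once.
NORMS = {
    ("m", 49): {"meanNN": (930, 133), "sdNN": (45.8, 18.8), "RMSSD": (34.0, 18.3)},
    ("f", 49): {"meanNN": (901, 117), "sdNN": (44.9, 19.2), "RMSSD": (36.5, 20.1)},
}

def adjust_hrv(hrv, sex, age):
    """Return age- and sex-adjusted HRV indices as z-scores."""
    # Pick the first age band (sorted by upper bound) that covers `age`.
    bands = sorted(band for (s, band) in NORMS if s == sex)
    band = next((b for b in bands if age <= b), None)
    if band is None:
        return {}
    return {
        key + "_Adjusted": (hrv[key] - mean) / sd
        for key, (mean, sd) in NORMS[(sex, band)].items()
    }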


File bio_ecg.py has 680 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*-
"""
Subsubmodule for ecg processing.
"""
import numpy as np
Severity: Major
Found in neurokit/bio/bio_ecg.py - About 1 day to fix

Function ecg_EventRelated has a Cognitive Complexity of 66 (exceeds 5 allowed). Consider refactoring.

def ecg_EventRelated(epoch, event_length=1, window_post=0, features=["Heart_Rate", "Cardiac_Phase", "RR_Interval", "RSA", "HRV"]):
    """
    Extract event-related ECG changes.

    Parameters
Severity: Minor
Found in neurokit/bio/bio_ecg.py - About 1 day to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules (illustrated in the sketch after this list):

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
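
As a minimal illustration of those rules (a sketch, not Code Climate's exact scoring): the two functions below do the same work, but the first nests its flow-breaking structures, which raises the score further per level, while the second keeps the flow linear with early returns.

def label_nested(age, sex):
    # Nested conditionals: each additional nesting level adds complexity.
    if sex == "m":
        if age <= 49:
            return "younger male"
        else:
            return "older male"
    else:
        return "other"

def label_flat(age, sex):
    # Early returns: the same logic, with every flow break at the top level.
    if sex != "m":
        return "other"
    if age <= 49:
        return "younger male"
    return "older male"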


File complexity.py has 524 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*-
import nolds
import numpy as np

# ==============================================================================
Severity: Major
Found in neurokit/signal/complexity.py - About 1 day to fix

Function compute_BMI has a Cognitive Complexity of 42 (exceeds 5 allowed). Consider refactoring.

def compute_BMI(height, weight, age, sex):
    """
    Returns the traditional BMI, the 'new' Body Mass Index and estimates the Body Fat Percentage (BFP; Deurenberg et al., 1991).

    Parameters
Severity: Minor
Found in neurokit/statistics/routines.py - About 6 hrs to fix
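
The docstring names three quantities. A hedged sketch of the standard formulas it cites, not NeuroKit's truncated implementation (the function name here is hypothetical): the traditional BMI, Trefethen's "new" BMI, and the Deurenberg et al. (1991) adult body-fat estimate.

def bmi_indices(height, weight, age, sex):
    """Sketch of the cited formulas; height in metres, weight in kg, sex 'm'/'f'."""
    bmi = weight / height ** 2                 # traditional BMI (kg/m^2)
    new_bmi = 1.3 * weight / height ** 2.5     # Trefethen's 'new' BMI
    sex_code = 1 if sex == "m" else 0
    # Deurenberg et al. (1991) adult body-fat percentage estimate
    bfp = 1.2 * bmi + 0.23 * age - 10.8 * sex_code - 5.4
    return {"BMI": bmi, "BMI_new": new_bmi, "BFP": bfp}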


Function complexity has a Cognitive Complexity of 42 (exceeds 5 allowed). Consider refactoring.

def complexity(signal, sampling_rate=1000, shannon=True, sampen=True, multiscale=True, spectral=True, svd=True, correlation=True, higushi=True, petrosian=True, fisher=True, hurst=True, dfa=True, lyap_r=False, lyap_e=False, emb_dim=2, tolerance="default", k_max=8, bands=None, tau=1):
    """
    Computes several chaos/complexity indices of a signal (including entropy, fractal dimensions, Hurst and Lyapunov exponent etc.).

    Parameters
Severity: Minor
Found in neurokit/signal/complexity.py - About 6 hrs to fix
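
A hedged usage sketch based only on the signature above, assuming NeuroKit exposes the function at the package level as it does its other functions; the exact keys of the returned indices are not shown in this excerpt.

import numpy as np
import neurokit as nk

# Toy 10-second, 1000 Hz signal; the slow Lyapunov estimators stay disabled,
# matching the defaults in the signature.
signal = np.sin(np.linspace(0, 20 * np.pi, 10000))
indices = nk.complexity(signal, sampling_rate=1000, lyap_r=False, lyap_e=False)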


Function eeg_complexity has a Cognitive Complexity of 40 (exceeds 5 allowed). Consider refactoring.

def eeg_complexity(eeg, sampling_rate, times=None, index=None, include="all", exclude=None, hemisphere="both", central=True, verbose=True, shannon=True, sampen=True, multiscale=True, spectral=True, svd=True, correlation=True, higushi=True, petrosian=True, fisher=True, hurst=True, dfa=True, lyap_r=False, lyap_e=False, names="Complexity"):
    """
    Compute complexity indices of epochs or raw object.

    DOCS INCOMPLETE :(
Severity: Minor
Found in neurokit/eeg/eeg_complexity.py - About 6 hrs to fix


Function read_acqknowledge has a Cognitive Complexity of 37 (exceeds 5 allowed). Consider refactoring.

def read_acqknowledge(filename, path="", index="datetime", sampling_rate="max", resampling_method="pad", fill_interruptions=True, return_sampling_rate=True):
    """
    Read and Format a BIOPAC's AcqKnowledge file into a pandas' dataframe.

    Parameters
Severity: Minor
Found in neurokit/bio/bio_data.py - About 5 hrs to fix
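
A hedged usage sketch based only on the signature above; "recording.acq" is a placeholder filename, and with return_sampling_rate=True the function presumably returns the detected rate alongside the dataframe.

import neurokit as nk

# Keyword values mirror the defaults shown in the signature.
df, sampling_rate = nk.read_acqknowledge("recording.acq",
                                         index="datetime",
                                         sampling_rate="max",
                                         resampling_method="pad",
                                         return_sampling_rate=True)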


File bio_eda.py has 392 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*-
from __future__ import division
import pandas as pd
import numpy as np
import biosppy
Severity: Minor
Found in neurokit/bio/bio_eda.py - About 5 hrs to fix

Function ecg_hrv has a Cognitive Complexity of 32 (exceeds 5 allowed). Consider refactoring.

def ecg_hrv(rpeaks=None, rri=None, sampling_rate=1000, hrv_features=["time", "frequency", "nonlinear"]):
    """
    Computes the Heart-Rate Variability (HRV). Shamelessly stolen from the `hrv <https://github.com/rhenanbartels/hrv/blob/develop/hrv>`_ package by Rhenan Bartels. All credits go to him.

    Parameters
Severity: Minor
Found in neurokit/bio/bio_ecg.py - About 4 hrs to fix
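
A hedged usage sketch from the signature above, computing time-domain HRV directly from a toy R-R interval series; the units of the rri argument are not shown in this excerpt, so milliseconds are assumed.

import neurokit as nk

rri = [820, 810, 785, 800, 795, 830, 815]  # toy R-R intervals, assumed in ms
hrv = nk.ecg_hrv(rri=rri, sampling_rate=1000, hrv_features=["time"])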


File routines.py has 355 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*-
from __future__ import division
from .statistics import normal_range
from .statistics import find_following_duplicates
from .statistics import find_closest_in_list
Severity: Minor
Found in neurokit/statistics/routines.py - About 4 hrs to fix

Similar blocks of code found in 3 locations. Consider refactoring.

    if resampling_method == "mean":
        if len(data_else.keys()) > 0:
            df2 = df2.resample(resampling_factor).mean()
        if int(sampling_rate) != int(max(freq_list)):
            df = df.resample(resampling_factor).mean()
Severity: Major
Found in neurokit/bio/bio_data.py and 2 other locations - About 4 hrs to fix
neurokit/bio/bio_data.py on lines 153..157
neurokit/bio/bio_data.py on lines 158..162


Similar blocks of code found in 3 locations. Consider refactoring.

    if resampling_method == "pad":
        if len(data_else.keys()) > 0:
            df2 = df2.resample(resampling_factor).pad()
        if int(sampling_rate) != int(max(freq_list)):
            df = df.resample(resampling_factor).pad()
Severity: Major
Found in neurokit/bio/bio_data.py and 2 other locations - About 4 hrs to fix
neurokit/bio/bio_data.py on lines 148..152
neurokit/bio/bio_data.py on lines 153..157


Similar blocks of code found in 3 locations. Consider refactoring.

    if resampling_method == "bfill":
        if len(data_else.keys()) > 0:
            df2 = df2.resample(resampling_factor).bfill()
        if int(sampling_rate) != int(max(freq_list)):
            df = df.resample(resampling_factor).bfill()
Severity: Major
Found in neurokit/bio/bio_data.py and 2 other locations - About 4 hrs to fix
neurokit/bio/bio_data.py on lines 148..152
neurokit/bio/bio_data.py on lines 158..162
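
The three blocks above differ only in which pandas Resampler method they call, so a single hypothetical helper that dispatches on the method name can replace all three. Variable names are taken from the excerpts; the helper itself is a sketch.

def apply_resampling(df, df2, data_else, sampling_rate, freq_list,
                     resampling_factor, resampling_method):
    """Resample both dataframes with the requested pandas Resampler method."""
    if resampling_method not in ("mean", "pad", "bfill"):
        raise ValueError("Unknown resampling_method: %s" % resampling_method)
    if len(data_else.keys()) > 0:
        df2 = getattr(df2.resample(resampling_factor), resampling_method)()
    if int(sampling_rate) != int(max(freq_list)):
        df = getattr(df.resample(resampling_factor), resampling_method)()
    return df, df2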


File bio_ecg_preprocessing.py has 345 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*-
"""
Subsubmodule for ecg processing.
"""
import numpy as np
Severity: Minor
Found in neurokit/bio/bio_ecg_preprocessing.py - About 4 hrs to fix

File eeg_microstates.py has 342 lines of code (exceeds 250 allowed). Consider refactoring.

"""
Microstates submodule.
"""
from ..signal import complexity
from ..miscellaneous import find_following_duplicates
Severity: Minor
Found in examples/UnderDev/eeg/eeg_microstates.py - About 4 hrs to fix

Function eeg_select_electrodes has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.

def eeg_select_electrodes(eeg, include="all", exclude=None, hemisphere="both", central=True):
    """
    Returns electrodes/sensors names of selected region (according to a 10-20 EEG montage).

    Parameters
Severity: Minor
Found in neurokit/eeg/eeg_data.py - About 3 hrs to fix
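
A hedged usage sketch using only the defaults visible in the signature, assuming the function is exported at package level like the others; what kind of object the eeg argument expects is not shown in this excerpt.

import neurokit as nk

# `raw` is assumed to be an EEG recording object loaded elsewhere.
electrodes = nk.eeg_select_electrodes(raw, include="all",
                                      hemisphere="both", central=True)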


Similar blocks of code found in 2 locations. Consider refactoring.

    M = cv.spmatrix(np.tile(ma, (n-2,1)), np.c_[i,i,i], np.c_[i,i-1,i-2], (n,n))
Severity: Major
Found in neurokit/bio/bio_eda.py and 1 other location - About 3 hrs to fix
neurokit/bio/bio_eda.py on lines 259..259


Similar blocks of code found in 2 locations. Consider refactoring.

    A = cv.spmatrix(np.tile(ar, (n-2,1)), np.c_[i,i,i], np.c_[i,i-1,i-2], (n,n))
Severity: Major
Found in neurokit/bio/bio_eda.py and 1 other location - About 3 hrs to fix
neurokit/bio/bio_eda.py on lines 260..260
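
The two one-liners above build the same three-diagonal sparse matrix and differ only in the coefficient row, so a small hypothetical helper removes the duplication. It assumes `i` indexes rows 2..n-1 (as the column offsets i, i-1, i-2 suggest) and that `cv` is cvxopt and `np` is numpy, as in bio_eda.py.

import numpy as np
import cvxopt as cv

def tridiag_spmatrix(coeffs, n):
    """(n, n) sparse matrix with `coeffs` on the main and two lower diagonals."""
    i = np.arange(2, n)
    return cv.spmatrix(np.tile(coeffs, (n - 2, 1)),
                       np.c_[i, i, i], np.c_[i, i - 1, i - 2], (n, n))

# Usage replacing the duplicated lines:
# M = tridiag_spmatrix(ma, n)
# A = tridiag_spmatrix(ar, n)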

