neuropsychology/NeuroKit.py

Showing 176 of 176 total issues

Function eeg_select_electrodes has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def eeg_select_electrodes(eeg, include="all", exclude=None, hemisphere="both", central=True):
Severity: Minor
Found in neurokit/eeg/eeg_data.py - About 35 mins to fix
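
One common way to get below the argument limit is to group related options into a parameter object. The sketch below is illustrative only: the ElectrodeSelection dataclass is hypothetical and not part of NeuroKit; it simply bundles the four selection options while keeping their current defaults.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ElectrodeSelection:
    # Hypothetical parameter object bundling the four selection options
    # with their current defaults.
    include: str = "all"
    exclude: Optional[List[str]] = None
    hemisphere: str = "both"
    central: bool = True

def eeg_select_electrodes(eeg, selection: Optional[ElectrodeSelection] = None):
    # Two parameters instead of five; the body would read selection.include,
    # selection.exclude, selection.hemisphere and selection.central as before.
    selection = selection or ElectrodeSelection()
    ...

Callers would then write, for example, eeg_select_electrodes(eeg, ElectrodeSelection(hemisphere="left", central=False)).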

Function emg_find_activation has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def emg_find_activation(envelope, sampling_rate=1000, threshold=0, n_above=0.25, n_below=1):
Severity: Minor
Found in neurokit/bio/bio_emg.py - About 35 mins to fix

Function eeg_gfp has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def eeg_gfp(raws, gflp_method="GFPL1", scale=True, normalize=True, smoothing=None):
Severity: Minor
Found in examples/UnderDev/eeg/eeg_microstates.py - About 35 mins to fix

Function segmenter_pekkanen has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def segmenter_pekkanen(ecg, sampling_rate, window_size=5.0, lfreq=5.0, hfreq=15.0):
Severity: Minor
Found in neurokit/bio/bio_ecg_preprocessing.py - About 35 mins to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    if "Heart_Rate" in features:
        if "Heart_Rate" in epoch.columns:
            ECG_Response = compute_features("Heart_Rate", "ECG_Heart_Rate", ECG_Response)
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 2 other locations - About 35 mins to fix
neurokit/bio/bio_ecg.py on lines 902..904
neurokit/bio/bio_ecg.py on lines 909..911

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 33.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
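
For the three near-identical feature blocks flagged in this entry and the two that follow ("Heart_Rate", "RR_Interval" and "RSA"), one way to honour DRY is to drive a single loop from a lookup table. This is only a sketch: FEATURE_COLUMNS and add_ecg_features are hypothetical names, and compute_features is passed in explicitly because it is defined elsewhere in bio_ecg.py.

# Maps each feature flag to (epoch column to read, output name to write).
FEATURE_COLUMNS = {
    "Heart_Rate": ("Heart_Rate", "ECG_Heart_Rate"),
    "RR_Interval": ("ECG_RR_Interval", "ECG_RRi"),
    "RSA": ("RSA", "ECG_RSA"),
}

def add_ecg_features(features, epoch, ECG_Response, compute_features):
    # Replaces the three duplicated if-blocks with one data-driven loop.
    for feature, (column, output_name) in FEATURE_COLUMNS.items():
        if feature in features and column in epoch.columns:
            ECG_Response = compute_features(column, output_name, ECG_Response)
    return ECG_Response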

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    if "RR_Interval" in features:
        if "ECG_RR_Interval" in epoch.columns:
            ECG_Response = compute_features("ECG_RR_Interval", "ECG_RRi", ECG_Response)
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 2 other locations - About 35 mins to fix
neurokit/bio/bio_ecg.py on lines 874..876
neurokit/bio/bio_ecg.py on lines 909..911

Similar blocks of code found in 3 locations. Consider refactoring.
Open

    if "RSA" in features:
        if "RSA" in epoch.columns:
            ECG_Response = compute_features("RSA", "ECG_RSA", ECG_Response)
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 2 other locations - About 35 mins to fix
neurokit/bio/bio_ecg.py on lines 874..876
neurokit/bio/bio_ecg.py on lines 902..904

Function localize_events has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def localize_events(events_channel, treshold="auto", cut="higher", time_index=None):
    """
    Find the onsets of all events based on a continuous signal.

    Parameters
Severity: Minor
Found in neurokit/signal/events.py - About 35 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
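
As a rough illustration of the nesting rule (hypothetical code, not NeuroKit's), the two functions below find threshold crossings in the same way, but the first nests its second condition inside the first and therefore reads, and scores, as more complex than the guard-clause version.

def onsets_nested(signal, threshold):
    # The inner "if" sits two flow-breaking structures deep (for -> if -> if).
    onsets = []
    for i, value in enumerate(signal):
        if value > threshold:
            if i == 0 or signal[i - 1] <= threshold:
                onsets.append(i)
    return onsets

def onsets_flat(signal, threshold):
    # A guard clause keeps every condition at the same nesting level.
    onsets = []
    for i, value in enumerate(signal):
        if value <= threshold:
            continue
        if i == 0 or signal[i - 1] <= threshold:
            onsets.append(i)
    return onsets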

Function complexity_entropy_multiscale has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def complexity_entropy_multiscale(signal, max_scale_factor=20, m=2, r="default"):
    """
    Computes the Multiscale Entropy. Uses sample entropy with 'chebychev' distance.

    Parameters
Severity: Minor
Found in neurokit/signal/complexity.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    try:
        # Fit clustering aglorithm
        algorithm = eeg_microstates_clustering(data=data_processed,
Severity: Minor
Found in examples/UnderDev/eeg/eeg_microstates.py and 1 other location - About 30 mins to fix
neurokit/bio/bio_meta.py on lines 145..145

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                bin_width = np.diff(np.histogram(RRi, bins=bin_number_current, density=True)[1])[0]
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 1 other location - About 30 mins to fix
neurokit/bio/bio_ecg.py on lines 593..593

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                if abs(8 - bin_width) < abs(8 - np.diff(np.histogram(RRi, bins=bin_number, density=True)[1])[0]):
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 1 other location - About 30 mins to fix
neurokit/bio/bio_ecg.py on lines 592..592
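
Both flagged lines recompute the same density-histogram bin width for the RR intervals; a small helper, sketched here with the hypothetical name rri_bin_width, would remove the duplicated expression.

import numpy as np

def rri_bin_width(RRi, bins):
    # Width of the first bin of a density histogram of the RR intervals.
    edges = np.histogram(RRi, bins=bins, density=True)[1]
    return np.diff(edges)[0]

# The two duplicated expressions would then read, e.g.:
#   bin_width = rri_bin_width(RRi, bin_number_current)
#   if abs(8 - bin_width) < abs(8 - rri_bin_width(RRi, bin_number)):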

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        power_per_band = [np.sum(psd[np.bitwise_and(freqs >= low, freqs<up)])
Severity: Minor
Found in neurokit/signal/complexity.py and 1 other location - About 30 mins to fix
neurokit/bio/bio_ecg.py on lines 235..236

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        emg = emg_process(emg=emg, sampling_rate=sampling_rate, emg_names=emg_names, envelope_freqs=emg_envelope_freqs, envelope_lfreq=emg_envelope_lfreq, activation_treshold=emg_activation_treshold, activation_n_above=emg_activation_n_above, activation_n_below=emg_activation_n_below)
Severity: Minor
Found in neurokit/bio/bio_meta.py and 1 other location - About 30 mins to fix
examples/UnderDev/eeg/eeg_microstates.py on lines 391..393

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        cycles_rri.append(rpeaks[np.logical_and(rpeaks >= cycle_init,
                                                rpeaks < cycle_end)])
Severity: Minor
Found in neurokit/bio/bio_ecg.py and 1 other location - About 30 mins to fix
neurokit/signal/complexity.py on lines 597..597
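
Both flagged expressions build the same kind of half-open range mask (one over PSD frequencies, one over R-peak positions). A tiny shared helper, sketched below with the hypothetical name range_mask, would cover both call sites; np.logical_and and np.bitwise_and are interchangeable here because the comparisons already yield boolean arrays.

import numpy as np

def range_mask(values, low, high):
    # Boolean mask selecting low <= values < high.
    return np.logical_and(values >= low, values < high)

# The duplicated sub-expressions could then read:
#   np.sum(psd[range_mask(freqs, low, up)])
#   rpeaks[range_mask(rpeaks, cycle_init, cycle_end)]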

Function eeg_to_df has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

def eeg_to_df(eeg, index=None, include="all", exclude=None, hemisphere="both", central=True):
    """
    Convert mne Raw or Epochs object to dataframe or dict of dataframes.

    DOCS INCOMPLETE :(
Severity: Minor
Found in neurokit/eeg/eeg_data.py - About 25 mins to fix
