WenjieDu/TSDB

Showing 12 of 12 total issues

Similar blocks of code found in 2 locations. Consider refactoring.
Open

Severity: Major
Found in tsdb/loading_funcs/solar_alabama.py and 1 other location - About 1 day to fix
tsdb/loading_funcs/pems_traffic.py on lines 0..48

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
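
As an illustration of the refactoring suggested here, the two flagged loaders could delegate their shared skeleton to one helper. The sketch below is hypothetical: the helper name, parameters, and CSV reading are assumptions, not TSDB's actual internals.

    # Hypothetical sketch only: names and file layout are assumptions,
    # not the real solar_alabama.py / pems_traffic.py implementations.
    import os

    import pandas as pd


    def _load_single_csv_dataset(local_path, csv_name):
        """Shared skeleton for loaders that read one CSV into a DataFrame."""
        df = pd.read_csv(os.path.join(local_path, csv_name))
        return {"X": df}


    def load_solar_alabama(local_path):
        return _load_single_csv_dataset(local_path, "solar_alabama.csv")


    def load_pems_traffic(local_path):
        return _load_single_csv_dataset(local_path, "pems_traffic.csv")

Once the shared block lives in one place, dataset-specific quirks stay in the thin wrappers and the duplication reported above disappears.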

Tuning

This issue has a mass of 135.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
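
For example, a .codeclimate.yml that raises the Python mass threshold might look like the sketch below; the value 150 is arbitrary, and the exact keys should be confirmed against the codeclimate-duplication documentation for your engine version.

    # Example only: raise the duplication threshold for Python so that
    # blocks with mass below 150 (such as this mass-135 issue) stop
    # being reported.
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 150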

Similar blocks of code found in 2 locations. Consider refactoring.
Open

Severity: Major
Found in tsdb/loading_funcs/pems_traffic.py and 1 other location - About 1 day to fix
tsdb/loading_funcs/solar_alabama.py on lines 0..50

Function _load_arff_uea has a Cognitive Complexity of 32 (exceeds 5 allowed). Consider refactoring.
Open

def _load_arff_uea(
    full_file_path_and_name,
    replace_missing_vals_with="NaN",
):
    """Load data from a classification/regression WEKA arff file to a 3D np array.
Severity: Minor
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules (illustrated in the sketch after this list):

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

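A minimal sketch of how those rules add up, using a made-up function (the per-line increments follow the published Cognitive Complexity rules; this is not TSDB code):

    def clip_sum(values, limit):
        total = 0
        for v in values:            # +1: break in the linear flow
            if v > 0:               # +1 for the branch, +1 for being nested
                total += v
        if total > limit:           # +1: another branch, back at top level
            total = limit
        return total                # cognitive complexity: 4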

File ucr_uea_datasets.py has 291 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Scripts related to UCR & UEA datasets: http://timeseriesclassification.com/index.php

Most of the code comes from the tslearn library: https://github.com/tslearn-team/tslearn.

Severity: Minor
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 3 hrs to fix

Function load_ais has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

def load_ais(local_path):
    """Load dataset AIS data, which is a time-series imputation and classification dataset.

    Parameters
    ----------

Severity: Minor
Found in tsdb/loading_funcs/vessel_ais.py - About 2 hrs to fix

Function _download_and_extract has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def _download_and_extract(url: str, saving_path: str) -> Optional[str]:
    """Download dataset from the given url and extract to the given saving path.

    Parameters
    ----------

Severity: Minor
Found in tsdb/utils/downloading.py - About 1 hr to fix
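
Replacing nested if/try blocks with early returns is one standard way to bring such a function back under the threshold. The sketch below is illustrative only; it does not reproduce the actual logic of tsdb/utils/downloading.py.

    import os
    import shutil
    import urllib.request
    from typing import Optional


    def download_and_extract(url: str, saving_path: str) -> Optional[str]:
        """Sketch: each failure exits early instead of deepening the nesting."""
        os.makedirs(saving_path, exist_ok=True)
        archive_path = os.path.join(saving_path, url.rsplit("/", 1)[-1])
        try:
            urllib.request.urlretrieve(url, archive_path)
        except Exception:
            return None  # guard: download failed, nothing to extract
        try:
            shutil.unpack_archive(archive_path, saving_path)
        except (shutil.ReadError, ValueError):
            return None  # guard: unrecognized or corrupt archive
        return saving_path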

Function determine_data_home has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def determine_data_home():
    # default path
    default_path = check_path("~/.pypots/tsdb")

    # read data_home from the config file

Severity: Minor
Found in tsdb/utils/file.py - About 55 mins to fix

Avoid deeply nested control flow statements.
Open

                    for c in range(len(channels)):
                        split = channels[c].split(",")
                        inst[c] = np.array([float(i) for i in split])
                else:

Severity: Major
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 45 mins to fix
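
Extracting the innermost loop into a helper is one common way to remove a level of nesting here; the sketch below uses hypothetical names and is not a drop-in patch for _load_arff_uea.

    import numpy as np


    def _parse_channels(channels):
        """Hypothetical helper: parse comma-separated channel strings into arrays."""
        return [np.array([float(i) for i in ch.split(",")]) for ch in channels]

    # At the call site, the nested for-loop collapses to a single assignment:
    #     inst = _parse_channels(channels)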

Function to_time_series_dataset has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def to_time_series_dataset(dataset, dtype=float):
    """Transforms a time series dataset so that it fits the format used in
    ``tslearn`` models.

    Parameters

Severity: Minor
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 45 mins to fix
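
Assuming this copy behaves like the tslearn original, the function pads ragged input into a single 3D array of shape (n_series, max_length, n_channels), with shorter series padded by NaN; a usage sketch:

    # Assumes tslearn-like behavior; the printed output is illustrative.
    dataset = to_time_series_dataset([[1, 2, 3], [4, 5]])
    print(dataset.shape)     # (2, 3, 1) -> (n_series, max_length, n_channels)
    print(dataset[1, :, 0])  # [ 4.  5. nan] -- shorter series are NaN-padded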

Avoid deeply nested control flow statements.
Open

                if is_multi_variate:
                    line, class_val = line.split("',")
                    class_val_list.append(class_val.strip())
                    channels = line.split("\\n")
                    channels[0] = channels[0].replace("'", "")

Severity: Major
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                    if is_first_case:
                        is_first_case = False
                        n_timepoints = len(line_parts) - 1
                    class_val_list.append(line_parts[-1].strip())

Severity: Major
Found in tsdb/loading_funcs/ucr_uea_datasets.py - About 45 mins to fix

Function load_physionet2012 has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def load_physionet2012(local_path):
    """Load dataset PhysioNet Challenge 2012, which is a time-series classification dataset.

    Parameters
    ----------

Severity: Minor
Found in tsdb/loading_funcs/physionet_2012.py - About 35 mins to fix
