USDA-ARS-NWRC/awsm

Showing 17 of 24 total issues

File ingest_data.py has 530 lines of code (exceeds 300 allowed). Consider refactoring.
Open

import copy
import glob
import os
from collections import OrderedDict
from datetime import datetime
Severity: Major
Found in awsm/interface/ingest_data.py - About 7 hrs to fix

Function hedrick_updating_procedure has 115 lines of code (exceeds 30 allowed). Consider refactoring.
Open

    def hedrick_updating_procedure(self, m_s, T_s_0, T_s_l, T_s, h2o_sat,
                                   density, z_s, x, y, update_info):
        """
        This function performs the direct insertion procedure and returns the
        updated fields.
Severity: Major
Found in awsm/interface/ingest_data.py - About 4 hrs to fix

File ipysnobal.py has 390 lines of code (exceeds 300 allowed). Consider refactoring.
Open

from pysnobal.c_snobal import snobal
from datetime import datetime, timedelta
import logging
import numpy as np
from smrf.framework.model_framework import SMRF
Severity: Minor
Found in awsm/models/pysnobal/ipysnobal.py - About 4 hrs to fix

File framework.py has 365 lines of code (exceeds 300 allowed). Consider refactoring.
Open

import copy
import logging
import os
import sys
from datetime import datetime
Severity: Minor
Found in awsm/framework/framework.py - About 3 hrs to fix

Function hedrick_updating_procedure has a Cognitive Complexity of 20 (exceeds 8 allowed). Consider refactoring.
Open

    def hedrick_updating_procedure(self, m_s, T_s_0, T_s_l, T_s, h2o_sat,
                                   density, z_s, x, y, update_info):
        """
        This function performs the direct insertion procedure and returns the
        updated fields.
Severity: Minor
Found in awsm/interface/ingest_data.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

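
As a rough illustration of these rules (a sketch of the scoring idea, not Code Climate's exact implementation): each flow-breaking structure adds a point, plus one more point per level of nesting it sits inside.

```python
def first_positive_index(rows):
    """Return (i, j) of the first positive cell, scanning row by row."""
    for i, row in enumerate(rows):        # +1  (flow break)
        for j, cell in enumerate(row):    # +2  (flow break, nested once)
            if cell > 0:                  # +3  (flow break, nested twice)
                return i, j
    return None                           # rough total: 6
```

Flattening the nesting, for example by extracting the inner loop into a helper, lowers the cognitive score even though the cyclomatic complexity stays the same.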

Function calc_offsets_nsteps has a Cognitive Complexity of 18 (exceeds 8 allowed). Consider refactoring.
Open

    def calc_offsets_nsteps(self, myawsm, update_info):
        """
        Function to calculate the offset for each update run and the number of
        steps the iSnobal run needs to run
Severity: Minor
Found in awsm/interface/ingest_data.py - About 1 hr to fix

Function run_awsm_daily_ops has a Cognitive Complexity of 17 (exceeds 8 allowed). Consider refactoring.
Open

def run_awsm_daily_ops(config_file):
    """
    Run each day separately. Calls run_awsm
    """
    # define some formats
Severity: Minor
Found in awsm/framework/framework.py - About 1 hr to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    prev_out_base = os.path.join(paths['path_dr'],
                                 paths['basin'],
                                 'wy{}'.format(wy),
                                 paths['project_name'],
                                 'runs')
Severity: Major
Found in awsm/framework/framework.py and 1 other location - About 1 hr to fix
awsm/framework/framework.py on lines 444..448

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 41.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
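
For reference, raising the Python mass threshold in `.codeclimate.yml` looks roughly like this (the structure follows codeclimate-duplication's documented layout; the value 50 is illustrative, not a recommendation):

```yaml
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          # minimum mass a block must have before it is checked for
          # duplication; raise it to report fewer, larger duplicates
          mass_threshold: 50
```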

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    prev_data_base = os.path.join(paths['path_dr'],
                                  paths['basin'],
                                  'wy{}'.format(wy),
                                  paths['project_name'],
                                  'data')
Severity: Major
Found in awsm/framework/framework.py and 1 other location - About 1 hr to fix
awsm/framework/framework.py on lines 438..442

This issue has a mass of 41.

Function do_update_pysnobal has 31 lines of code (exceeds 30 allowed). Consider refactoring.
Open

    def do_update_pysnobal(self, output_rec, dt):
        """
        Function to update a time step of a pysnobal run by updating the
        output_rec
Severity: Minor
Found in awsm/interface/ingest_data.py - About 1 hr to fix

Function hedrick_updating_procedure has 10 arguments (exceeds 6 allowed). Consider refactoring.
Open

    def hedrick_updating_procedure(self, m_s, T_s_0, T_s_l, T_s, h2o_sat,
Severity: Major
Found in awsm/interface/ingest_data.py - About 1 hr to fix
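
A common fix for a long parameter list is the "introduce parameter object" refactoring: bundle the fields that always travel together into one container and pass that instead. A sketch (the `SnowState` class and the field meanings are assumptions mirroring the flagged signature, not AWSM's actual API):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SnowState:
    """Gridded snow-state fields that travel together (hypothetical bundle).

    Field meanings follow the usual iSnobal conventions and are assumptions
    here, not documentation of AWSM's internals.
    """
    m_s: np.ndarray       # specific mass
    T_s_0: np.ndarray     # active-layer temperature
    T_s_l: np.ndarray     # lower-layer temperature
    T_s: np.ndarray       # snowcover temperature
    h2o_sat: np.ndarray   # liquid-water saturation
    density: np.ndarray   # snow density
    z_s: np.ndarray       # snow depth


def hedrick_updating_procedure(state, x, y, update_info):
    """Same procedure, now with 4 arguments instead of 10 (sketch only)."""
    # ... body unchanged, reading e.g. state.m_s, state.z_s ...
```

Callers then construct one `SnowState` per grid and pass it through, which also gives the seven arrays a single place to be documented and validated.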

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    h2o_buf = np.concatenate((tmp1, np.concatenate(
        (tmp2, h2o_sat, tmp2), axis=0), tmp1), axis=1)
Severity: Major
Found in awsm/interface/ingest_data.py and 4 other locations - About 35 mins to fix
awsm/interface/ingest_data.py on lines 494..495
awsm/interface/ingest_data.py on lines 496..497
awsm/interface/ingest_data.py on lines 498..499
awsm/interface/ingest_data.py on lines 500..501

This issue has a mass of 33.
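
All five flagged blocks apply the same padding pattern to a different field: stack `tmp2` above and below, then `tmp1` on the left and right. Factoring that into a helper removes the duplication (`pad_buffer` is a hypothetical name; the `tmp1`/`tmp2` shapes are inferred from the snippets):

```python
import numpy as np


def pad_buffer(field, tmp1, tmp2):
    """Wrap a 2-D field with buffer rows (tmp2) and buffer columns (tmp1)."""
    core = np.concatenate((tmp2, field, tmp2), axis=0)   # pad top and bottom
    return np.concatenate((tmp1, core, tmp1), axis=1)    # pad left and right
```

Each flagged line then collapses to a call such as `h2o_buf = pad_buffer(h2o_sat, tmp1, tmp2)`, and any future change to the padding scheme happens in one place.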

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    T_s_l_buf = np.concatenate((tmp1, np.concatenate(
        (tmp2, T_s_l, tmp2), axis=0), tmp1), axis=1)
Severity: Major
Found in awsm/interface/ingest_data.py and 4 other locations - About 35 mins to fix
awsm/interface/ingest_data.py on lines 494..495
awsm/interface/ingest_data.py on lines 496..497
awsm/interface/ingest_data.py on lines 500..501
awsm/interface/ingest_data.py on lines 502..503

This issue has a mass of 33.

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    T_s_0_buf = np.concatenate((tmp1, np.concatenate(
        (tmp2, T_s_0, tmp2), axis=0), tmp1), axis=1)
Severity: Major
Found in awsm/interface/ingest_data.py and 4 other locations - About 35 mins to fix
awsm/interface/ingest_data.py on lines 494..495
awsm/interface/ingest_data.py on lines 498..499
awsm/interface/ingest_data.py on lines 500..501
awsm/interface/ingest_data.py on lines 502..503

This issue has a mass of 33.

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    T_s_buf = np.concatenate((tmp1, np.concatenate(
        (tmp2, T_s, tmp2), axis=0), tmp1), axis=1)
Severity: Major
Found in awsm/interface/ingest_data.py and 4 other locations - About 35 mins to fix
awsm/interface/ingest_data.py on lines 494..495
awsm/interface/ingest_data.py on lines 496..497
awsm/interface/ingest_data.py on lines 498..499
awsm/interface/ingest_data.py on lines 502..503

This issue has a mass of 33.

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    rho_buf = np.concatenate((tmp1, np.concatenate(
        (tmp2, rho, tmp2), axis=0), tmp1), axis=1)
Severity: Major
Found in awsm/interface/ingest_data.py and 4 other locations - About 35 mins to fix
awsm/interface/ingest_data.py on lines 496..497
awsm/interface/ingest_data.py on lines 498..499
awsm/interface/ingest_data.py on lines 500..501
awsm/interface/ingest_data.py on lines 502..503

This issue has a mass of 33.

Function run_awsm has a Cognitive Complexity of 9 (exceeds 8 allowed). Consider refactoring.
Open

def run_awsm(config):
    """
    Function that runs awsm as it should operate for full runs.

    Args:
Severity: Minor
Found in awsm/framework/framework.py - About 25 mins to fix
