KarrLab/obj_tables

Showing 542 of 542 total issues

Function run has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

    def run(self, path, schema_name=None, models=None,
            allow_multiple_sheets_per_model=False,
            ignore_missing_models=False, ignore_extra_models=False,
            ignore_sheet_order=False,
            include_all_attributes=True, ignore_missing_attributes=False, ignore_extra_attributes=False,
Severity: Minor
Found in obj_tables/io.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
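
As a rough illustration of these rules (this sketch is not obj_tables code), the nested version below accrues an extra increment for each level of nesting around its loops and conditionals, while the equivalent comprehension uses language shorthand and stays flat:

def count_defined_nested(rows):
    total = 0
    for row in rows:                      # +1
        if row:                           # +2 (nested once)
            for cell in row:              # +3 (nested twice)
                if cell is not None:      # +4 (nested three times)
                    total += 1
    return total                          # score roughly 10


def count_defined_flat(rows):
    # Same behavior; a comprehension is shorthand the language provides for
    # collapsing statements, so it avoids most of the nesting penalties above.
    return sum(1 for row in rows if row for cell in row if cell is not None)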

Further reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __init__(self, default=None, none_value=None, verbose_name='', description='',
                 primary=False, unique=False, unique_case_insensitive=False):
        """
        Args:
            default (:obj:`sympy.Expr`, optional): default value
Severity: Major
Found in obj_tables/math/symbolic.py and 1 other location - About 3 hrs to fix
obj_tables/math/symbolic.py on lines 212..224

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 63.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
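
For illustration, a .codeclimate.yml along these lines would raise the minimum mass for Python blocks before they are compared for duplication; the exact option names should be confirmed against the codeclimate-duplication documentation:

# Illustrative .codeclimate.yml sketch; check codeclimate-duplication's
# documentation for the current option names.
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60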

Refactorings

Further Reading

Function run has 24 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, path, objects, schema_name=None, doc_metadata=None, model_metadata=None,
Severity: Major
Found in obj_tables/io.py - About 3 hrs to fix

Function run has 24 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, path, objects, schema_name=None, doc_metadata=None, model_metadata=None,
Severity: Major
Found in obj_tables/io.py - About 3 hrs to fix

Function run has 24 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, path, objects, schema_name=None, doc_metadata=None, model_metadata=None, models=None,
Severity: Major
Found in obj_tables/io.py - About 3 hrs to fix

Function run has 24 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, path, objects, schema_name=None, doc_metadata=None, model_metadata=None,
Severity: Major
Found in obj_tables/io.py - About 3 hrs to fix

Function run has 24 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def run(self, path, objects, schema_name=None, doc_metadata=None, model_metadata=None,
Severity: Major
Found in obj_tables/io.py - About 3 hrs to fix
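
One common way to shrink signatures like these, sketched here with hypothetical names rather than the actual obj_tables API, is to gather related keyword arguments into a single parameter object so callers pass one argument instead of two dozen:

# Hypothetical sketch, not the obj_tables API: bundle related keyword
# arguments of run() into one parameter object.
from dataclasses import dataclass
from typing import Optional


@dataclass
class WriteOptions:
    """Settings that would otherwise each be a separate run() argument."""
    schema_name: Optional[str] = None
    doc_metadata: Optional[dict] = None
    model_metadata: Optional[dict] = None
    include_all_attributes: bool = True


class Writer:
    def run(self, path, objects, options: Optional[WriteOptions] = None):
        """Write objects to path, taking all remaining settings from options."""
        options = options or WriteOptions()
        # The former keyword arguments are now options.schema_name,
        # options.doc_metadata, options.model_metadata, etc.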

Function deserialize has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def deserialize(self, values, objects, decoded=None):
        """ Deserialize value

        Args:
            values (:obj:`object`): String representation
Severity: Minor
Found in obj_tables/core.py - About 2 hrs to fix

Function cut_relations has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def cut_relations(self, objs_to_keep=None):
        """ Cut relations to objects not in :obj:`objs`.

        Args:
            objs_to_keep (:obj:`set` of :obj:`Model`, optional): objects to retain relations to
Severity: Minor
Found in obj_tables/core.py - About 2 hrs to fix

Function init_related_attributes has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def init_related_attributes(cls, model_cls):
        """ Initialize related attributes """
        for attr in model_cls.Meta.attributes.values():
            if isinstance(attr, RelatedAttribute):

Severity: Minor
Found in obj_tables/core.py - About 2 hrs to fix

Function transform has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
Open

def transform(filename):
    # read
    wb = openpyxl.load_workbook(filename=filename)

    for ws in wb:
Severity: Minor
Found in migrations/migration_2019_10_10.py - About 2 hrs to fix

Function read_worksheet_metadata has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
Open

    def read_worksheet_metadata(cls, sheet_name, rows):
        """ Read worksheet metadata

        Args:
            sheet_name (:obj:`str`): sheet name
Severity: Minor
Found in obj_tables/io.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            if self.species:
                for part in parts:
                    species = self.species.get(part.species, None)
                    if not species:
                        raise ValueError('Species "{}" must be defined'.format(part.species))
Severity: Major
Found in obj_tables/chem/core.py and 1 other location - About 2 hrs to fix
obj_tables/chem/core.py on lines 675..680

            This issue has a mass of 58.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            if self.compartments:
                for part in parts:
                    compartment = self.compartments.get(part.compartment, None)
                    if not compartment:
                        raise ValueError('Compartment "{}" must be defined'.format(part.compartment))
Severity: Major
Found in obj_tables/chem/core.py and 1 other location - About 2 hrs to fix
obj_tables/chem/core.py on lines 668..673

            This issue has a mass of 58.
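
The two flagged blocks differ only in the mapping they consult and the label used in the error message, so a helper along these lines (hypothetical name, not the actual obj_tables code) could replace both loops:

# Hypothetical refactoring sketch, not the actual obj_tables code.
def get_defined_parts(parts, defined, attr_name, label):
    """Resolve each part's identifier against a mapping of defined objects."""
    resolved = []
    for part in parts:
        key = getattr(part, attr_name)
        obj = defined.get(key, None)
        if not obj:
            raise ValueError('{} "{}" must be defined'.format(label, key))
        resolved.append(obj)
    return resolved

# Usage sketch inside the caller:
#   if self.species:
#       get_defined_parts(parts, self.species, 'species', 'Species')
#   if self.compartments:
#       get_defined_parts(parts, self.compartments, 'compartment', 'Compartment')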

Function viz_schema has 21 arguments (exceeds 4 allowed). Consider refactoring.
Open

def viz_schema(module, filename, attributes=True, tail_labels=True, hidden_classes=None, extra_edges=None,
Severity: Major
Found in obj_tables/utils.py - About 2 hrs to fix

Function standardize has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def standardize(self):
        """ Standardize the attributes of a :obj:`MigrationSpec`

        In particular, standardize a :obj:`MigrationSpec` that has been read from a YAML config file
        """
Severity: Minor
Found in obj_tables/migrate.py - About 2 hrs to fix

Function validate has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def validate(self, obj, value):
        """ Determine if :obj:`value` is a valid value

        Args:
            obj (:obj:`Model`): class being validated
Severity: Minor
Found in obj_tables/math/numeric.py - About 2 hrs to fix

Function decode_data has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def decode_data(encoded_data):
    """ Decode a data structure (arbitrary combination of list and dictionaries) that contains
    dictionaries that represent encoded objects and their relationships, preserving the high-level
    structure of the data structure. Objects and their relationships should be encoded into the
    data structure as follows:
Severity: Minor
Found in examples/decode_json_data.py - About 2 hrs to fix

Function validate has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def validate(self, obj, value):
        """ Determine if :obj:`value` is a valid value of the attribute

        Args:
            obj (:obj:`Model`): object being validated
Severity: Minor
Found in obj_tables/core.py - About 2 hrs to fix

Function __init__ has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def __init__(self, enum_class, none=False, default=None, default_cleaned_value=None, none_value=None, verbose_name='', description='',
                 primary=False, unique=False, unique_case_insensitive=False):
        """
        Args:
            enum_class (:obj:`type` or :obj:`list`): subclass of :obj:`Enum`, :obj:`list` of enumerated names,
Severity: Minor
Found in obj_tables/core.py - About 2 hrs to fix
