dammy/core.py

Summary

Maintainability: D (about 2 days to fix)
Test Coverage: not reported

File core.py has 834 lines of code (exceeds 250 allowed). Consider refactoring.
Wontfix

"""
This module contains the most important dammy classes. Ideally it would be split
into three modules (core, db, dataset_generator), but the dependencies between them
make this impossible without causing circular imports.
"""
Severity: Major
Found in dammy/core.py - About 2 days to fix

    Function to_sql has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
    Open

        def to_sql(self, save_to=None, create_tables=True):
            """
            Gets the dataset as SQL INSERT statements. The generated SQL is always returned; if save_to is specified,
            it is also saved to that location. Additional CREATE TABLE statements are added when create_tables is set to True.
    
    
    Severity: Minor
    Found in dammy/core.py - About 3 hrs to fix
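For orientation, the signature above suggests usage along these lines. This is a hedged sketch: the variable dataset stands for whatever object exposes to_sql() and is not taken from dammy's documentation.

    # Hypothetical usage; 'dataset' is assumed to be an object exposing to_sql().
    sql = dataset.to_sql(save_to='dump.sql', create_tables=True)
    # The statements are always returned, so they can also be inspected directly.
    print(sql)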

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

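To illustrate the rules above, here is a small self-contained Python sketch (invented for this explanation, not taken from dammy): nested flow breaks accumulate increments, while language shorthand that collapses the same logic keeps the measured complexity much lower.

    def categorize_nested(values):
        result = []
        for v in values:                      # +1: break in the linear flow
            if v is not None:                 # +2: nested one level deep
                if v > 0:                     # +3: nested two levels deep
                    result.append('positive')
                else:                         # the else branch adds another increment
                    result.append('non-positive')
        return result

    def categorize_flat(values):
        # Equivalent behaviour using shorthand (a comprehension with a
        # conditional expression), which scores far lower.
        return ['positive' if v > 0 else 'non-positive'
                for v in values if v is not None]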

    BaseGenerator has 27 functions (exceeds 20 allowed). Consider refactoring.
    Wontfix

    class BaseGenerator:
        DAMMY_LOCALIZATION = LOCALIZATION
        """
        The base class from which all generators must inherit.
        """
    Severity: Minor
    Found in dammy/core.py - About 3 hrs to fix

      Function generate_raw has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
      Wontfix

          def generate_raw(self, dataset=None, localization=None):
              """
              Generates a value and performs the operation.
              It will raise a TypeError if the operator is invalid.
      
      
      Severity: Minor
      Found in dammy/core.py - About 1 hr to fix

      Function generate_raw has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
      Wontfix

          def generate_raw(self, dataset=None, localization=None):
              """
              Gets all the attributes of the class and generates a new value.
      
              Implementation of the generate_raw() method from BaseGenerator.
      Severity: Minor
      Found in dammy/core.py - About 1 hr to fix

      Function generate_raw has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
      Wontfix

          def generate_raw(self, dataset=None, localization=None):
              """
              Generate a value and call the specified method on the generated value
      
              Implementation of the generate_raw() method from BaseGenerator.
      Severity: Minor
      Found in dammy/core.py - About 55 mins to fix

      Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
      Wontfix

          def __init__(self, a, b, op, sql):
      Severity: Minor
      Found in dammy/core.py - About 35 mins to fix

        Function __generate_using has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
        Wontfix

            def __generate_using(self, method, dataset=None, localization=None):
                generated = []
                for x in self.fields.values():
                    generate_method = getattr(x, method)
                    generated.append(generate_method(dataset, localization))
        Severity: Minor
        Found in dammy/core.py - About 35 mins to fix

        Function generate_raw has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
        Wontfix

            def generate_raw(self, dataset=None, localization=None):
                """
                Gets the values corresponding to the key from the given dataset. If the dataset is not specified,
                a DatasetRequiredException will be raised.
        
        
        Severity: Minor
        Found in dammy/core.py - About 35 mins to fix

        Function _get_operand_value has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
        Wontfix

            def _get_operand_value(op, dataset=None):
                """
                Get the value of the operand. If it is a generator, the value of the operand
                will be a value generated by the generator. If it is not, the value will be
                the input value.
        Severity: Minor
        Found in dammy/core.py - About 35 mins to fix
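The docstring describes a simple dispatch on the operand type. A minimal sketch of that pattern is shown below; it is an illustration based on the docstring, not the actual body of _get_operand_value.

    from dammy.core import BaseGenerator

    def get_operand_value(op, dataset=None):
        # Generators produce their value on demand; anything else is used as-is.
        if isinstance(op, BaseGenerator):
            return op.generate_raw(dataset)
        return op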

        Function generate_raw has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
        Wontfix

            def generate_raw(self, dataset=None, localization=None):
                """
                Generate a new dataset with the previously given specifications
        
                Implementation of the generate_raw() method from BaseGenerator.
        Severity: Minor
        Found in dammy/core.py - About 25 mins to fix

        Similar blocks of code found in 2 locations. Consider refactoring.
        Open

                        elif isinstance(col_obj, PrimaryKey):
                            # Add the columns
                            tables[name]['columns'].extend(col_obj.fields.keys())
                            tables[name]['column_types'].extend([x._sql_equivalent for x in col_obj.fields.values()])
        
        
        Severity: Major
        Found in dammy/core.py and 1 other location - About 6 hrs to fix
        dammy/core.py on lines 980..989
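Because this PrimaryKey branch and the Unique branch reported below extend the same column lists in the same way, one possible refactoring is to route both constraint types through a single helper. The sketch below is illustrative only: the helper name is invented, and the attribute names simply mirror the reported snippets.

    def add_constraint_columns(table_entry, col_obj):
        # Both PrimaryKey and Unique contribute their fields as columns.
        table_entry['columns'].extend(col_obj.fields.keys())
        table_entry['column_types'].extend(
            x._sql_equivalent for x in col_obj.fields.values()
        )

    # At the call site the two elif branches would collapse into one:
    # elif isinstance(col_obj, (PrimaryKey, Unique)):
    #     add_constraint_columns(tables[name], col_obj)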

        Duplicated Code

        Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

        Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

        When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

        Tuning

        This issue has a mass of 105.

        We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

        The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

        If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

        See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
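As a rough reference, the threshold lives in .codeclimate.yml; a sketch of the relevant section is shown below. The exact keys depend on the configuration version in use, so verify against the codeclimate-duplication documentation before copying.

    # .codeclimate.yml (sketch; key names may vary between config versions)
    version: "2"
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 60   # raise to report less duplication, lower to report more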

        Similar blocks of code found in 2 locations. Consider refactoring.
        Open

                        elif isinstance(col_obj, Unique):
                            # Add the columns
                            tables[name]['columns'].extend(col_obj.fields.keys())
                            tables[name]['column_types'].extend([x._sql_equivalent for x in col_obj.fields.values()])
        
        
        Severity: Major
        Found in dammy/core.py and 1 other location - About 6 hrs to fix
        dammy/core.py on lines 967..976

        This issue has a mass of 105.

        Similar blocks of code found in 2 locations. Consider refactoring.
        Wontfix

                self._fixed_counters = dict((v[0].__name__, v[1]) for v in args)
        Severity: Minor
        Found in dammy/core.py and 1 other location - About 50 mins to fix
        dammy/core.py on lines 856..856
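One way to remove this small duplication and its twin reported further down is to build both mappings in a single pass. The sketch below is inferred from the two reported lines only: the class name is invented, and treating args as (generator class, count) pairs is an assumption.

    class GeneratorRegistry:
        def __init__(self, *args):
            # Build both name-keyed mappings in one loop instead of two
            # near-identical dict() calls.
            self._fixed_counters = {}
            self._name_class_map = {}
            for cls, count in args:
                self._fixed_counters[cls.__name__] = count
                self._name_class_map[cls.__name__] = cls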

        This issue has a mass of 36.

        Identical blocks of code found in 2 locations. Consider refactoring.
        Open

                    for x in self.fields.values():
                        generate_method = getattr(x, method)
                        generated.append(generate_method(dataset, localization))
        Severity: Minor
        Found in dammy/core.py and 1 other location - About 50 mins to fix
        dammy/core.py on lines 659..661

        This issue has a mass of 36.

        Similar blocks of code found in 2 locations. Consider refactoring.
        Wontfix

                self._name_class_map = dict((v[0].__name__, v[0]) for v in args)
        Severity: Minor
        Found in dammy/core.py and 1 other location - About 50 mins to fix
        dammy/core.py on lines 855..855

        This issue has a mass of 36.

        Identical blocks of code found in 2 locations. Consider refactoring.
        Open

                for x in self.fields.values():
                    generate_method = getattr(x, method)
                    generated.append(generate_method(dataset, localization))
        Severity: Minor
        Found in dammy/core.py and 1 other location - About 50 mins to fix
        dammy/core.py on lines 667..669

        This issue has a mass of 36.
