saltstack/salt

salt/utils/schema.py

Summary

Maintainability: F (1 wk)
Test Coverage: n/a

File schema.py has 1233 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# -*- coding: utf-8 -*-
'''
    :codeauthor: Pedro Algarvio (pedro@algarvio.me)
    :codeauthor: Alexandru Bleotu (alexandru.bleotu@morganstanley.com)

Severity: Major
Found in salt/utils/schema.py - About 3 days to fix

    Function serialize has a Cognitive Complexity of 73 (exceeds 5 allowed). Consider refactoring.
    Open

        def serialize(cls, id_=None):
            # The order matters
            serialized = OrderedDict()
            if id_ is not None:
                # This is meant as a configuration section, sub json schema
    Severity: Minor
    Found in salt/utils/schema.py - About 1 day to fix
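
    One way to bring a serialize method of this size under control is to have it compose its result from small, single-purpose helpers instead of one long chain of nested conditionals. The sketch below is only an illustration of that split; the Schema class and the helper names (_serialize_header, _serialize_properties) are hypothetical stand-ins, not the actual classes or methods in salt/utils/schema.py.

        from collections import OrderedDict


        class Schema(object):  # hypothetical stand-in, not salt's configuration classes
            title = 'Example'
            description = 'Example schema'
            _items = {}

            @classmethod
            def serialize(cls, id_=None):
                # Compose the result from small helpers instead of one long
                # method full of nested conditionals.
                serialized = OrderedDict()
                serialized.update(cls._serialize_header(id_))
                serialized.update(cls._serialize_properties())
                return serialized

            @classmethod
            def _serialize_header(cls, id_):
                header = OrderedDict()
                if id_ is not None:
                    # Treated as a configuration section / sub JSON schema
                    header['id'] = id_
                header['title'] = cls.title
                header['description'] = cls.description
                return header

            @classmethod
            def _serialize_properties(cls):
                props = OrderedDict(
                    (name, item.serialize()) for name, item in cls._items.items()
                )
                return {'properties': props} if props else {}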

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

    Further reading
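
    As a rough illustration of how these rules add up, consider the toy function below (unrelated to schema.py). Each flow-breaking structure adds one point, and each level of nesting adds one more, so the approximate score is 5; extracting the inner checks into a helper, or flattening them with guard clauses, would lower it.

        def pick(values, limit):
            total = 0
            for value in values:       # +1: flow-breaking structure
                if value is None:      # +2: flow break, nested one level
                    continue
                if value > limit:      # +2: flow break, nested one level
                    total += value
            return total               # approximate cognitive complexity: 5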

    Function __validate_attributes__ has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
    Open

        def __validate_attributes__(self):
            if not self.properties and not self.pattern_properties and not self.additional_properties:
                raise RuntimeError(
                    'One of properties, pattern_properties or additional_properties must be passed'
                )
    Severity: Minor
    Found in salt/utils/schema.py - About 3 hrs to fix
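
    When a validator like __validate_attributes__ accumulates many near-identical if/raise branches, one hedged option is to drive the checks from a small table. The sketch below shows the general pattern only; the class, the attribute names, and the allowed types are illustrative and do not mirror the real checks in salt/utils/schema.py.

        class DictConfigExample(object):
            """Toy object standing in for a schema item with optional attribute groups."""

            def __init__(self, properties=None, pattern_properties=None,
                         additional_properties=None):
                self.properties = properties
                self.pattern_properties = pattern_properties
                self.additional_properties = additional_properties

            def __validate_attributes__(self):
                # Require at least one attribute group, then run the remaining
                # per-attribute checks from a single table instead of one
                # branch per attribute.
                groups = ('properties', 'pattern_properties', 'additional_properties')
                if not any(getattr(self, name) for name in groups):
                    raise RuntimeError(
                        'One of properties, pattern_properties or '
                        'additional_properties must be passed'
                    )
                checks = [
                    # (attribute name, allowed types) -- illustrative only
                    ('properties', (dict,)),
                    ('pattern_properties', (dict,)),
                ]
                for name, allowed in checks:
                    value = getattr(self, name)
                    if value is not None and not isinstance(value, allowed):
                        raise RuntimeError(
                            '{0} must be one of {1}, not {2}'.format(
                                name, allowed, type(value))
                        )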

    Function __new__ has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
    Open

            def __new__(mcs, name, bases, attributes):
                try:
                    constructor = attributes["__new__"]
                except KeyError:
                    return type.__new__(mcs, name, bases, attributes)
    Severity: Minor
    Found in salt/utils/schema.py - About 3 hrs to fix

    Cyclomatic complexity is too high in method serialize. (29)
    Open

        @classmethod
        def serialize(cls, id_=None):
            # The order matters
            serialized = OrderedDict()
            if id_ is not None:
    Severity: Minor
    Found in salt/utils/schema.py by radon

    Cyclomatic Complexity

    Cyclomatic Complexity corresponds to the number of decisions a block of code contains plus 1. This number (also called McCabe number) is equal to the number of linearly independent paths through the code. This number can be used as a guide when testing conditional logic in blocks.

    Radon analyzes the AST tree of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

    Construct        | Effect on CC | Reasoning
    -----------------|--------------|-----------------------------------------------------------------
    if               | +1           | An if statement is a single decision.
    elif             | +1           | The elif statement adds another decision.
    else             | +0           | The else statement does not cause a new decision. The decision is at the if.
    for              | +1           | There is a decision at the start of the loop.
    while            | +1           | There is a decision at the while statement.
    except           | +1           | Each except branch adds a new conditional path of execution.
    finally          | +0           | The finally block is unconditionally executed.
    with             | +1           | The with statement roughly corresponds to a try/except block (see PEP 343 for details).
    assert           | +1           | The assert statement internally roughly equals a conditional statement.
    Comprehension    | +1           | A list/set/dict comprehension or generator expression is equivalent to a for loop.
    Boolean Operator | +1           | Every boolean operator (and, or) adds a decision point.

    Source: http://radon.readthedocs.org/en/latest/intro.html
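
    Applying the table above to a small made-up function: one for, one except, one if, and one boolean or each count as a decision, so radon would report a cyclomatic complexity of 4 + 1 = 5.

        def parse_ports(entries):
            ports = []
            for entry in entries:                    # for: +1
                try:
                    value = int(entry)
                except (TypeError, ValueError):      # except: +1
                    continue
                if value < 1 or value > 65535:       # if: +1, or: +1
                    continue
                ports.append(value)
            return ports                             # CC = 4 decisions + 1 = 5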

    Function __new__ has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
    Open

        def __new__(mcs, name, bases, attrs):
            # Mark the instance as a configuration document/section
            attrs['__config__'] = True
            attrs['__flatten__'] = False
            attrs['__config_name__'] = None
    Severity: Minor
    Found in salt/utils/schema.py - About 2 hrs to fix

    Function serialize has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
    Open

        def serialize(cls, id_=None):
            # Get the initial serialization
            serialized = super(DefinitionsSchema, cls).serialize(id_)
            complex_items = []
            # Augment the serializations with the definitions of all complex items
    Severity: Minor
    Found in salt/utils/schema.py - About 2 hrs to fix

    Function serialize has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.
    Open

        def serialize(self):
            result = super(DictItem, self).serialize()
            required = []
            if self.properties is not None:
                if isinstance(self.properties, Schema):
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function __new__ has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
    Open

        def __new__(mcs, name, bases, attrs):
            # Register the class as an item class
            attrs['__item__'] = True
            attrs['__item_name__'] = None
            # Instantiate an empty list to store the config item attribute names
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function __call__ has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
    Open

        def __call__(cls, *args, **kwargs):
            # Create the instance class
            instance = object.__new__(cls)
            if args:
                raise RuntimeError(
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function __validate_attributes__ has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
    Open

        def __validate_attributes__(self):
            if not self.items and not self.additional_items:
                raise RuntimeError(
                    'One of items or additional_items must be passed.'
                )
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function defaults has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
    Open

        def defaults(cls):
            serialized = cls.serialize()
            defaults = {}
            for name, details in serialized['properties'].items():
                if 'default' in details:
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function __validate_attributes__ has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
    Open

        def __validate_attributes__(self):
            if self.enum is not None:
                if not isinstance(self.enum, (list, tuple, set)):
                    raise RuntimeError(
                        'Only the \'list\', \'tuple\' and \'set\' python types can be used '
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function serialize has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def serialize(self):
            '''
            Return a serializable form of the config instance
            '''
            serialized = {'type': self.__type__}
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Function __validate_attributes__ has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def __validate_attributes__(self):
            if self.requirements is None:
                raise RuntimeError(
                    'The passed requirements must not be empty'
                )
    Severity: Minor
    Found in salt/utils/schema.py - About 1 hr to fix

    Avoid deeply nested control flow statements.
    Open

                            if isinstance(after_items_update[name], list):
                                after_items_update[name].extend(data)
                        else:
    Severity: Major
    Found in salt/utils/schema.py - About 45 mins to fix
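
    Deep nesting like this can usually be flattened with guard clauses, so that each condition exits the iteration early instead of pushing the next check another level deeper. A generic sketch of the pattern, not the actual update logic in schema.py:

        def merge_updates(after_items_update, updates):
            # Flattened with guard clauses: each condition bails out early
            # instead of nesting the next check another level deeper.
            for name, data in updates.items():
                if name not in after_items_update:
                    after_items_update[name] = data
                    continue
                current = after_items_update[name]
                if not isinstance(current, list):
                    # Nothing sensible to merge into; keep the existing value.
                    continue
                current.extend(data)
            return after_items_update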

      Function get_definition has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
      Open

          def get_definition(self):
              '''Returns the definition of the complex item'''
      
              serialized = super(ComplexSchemaItem, self).serialize()
              # Adjust entries in the serialization
      Severity: Minor
      Found in salt/utils/schema.py - About 45 mins to fix

      Function serialize has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
      Open

          def serialize(self):
              if isinstance(self.requirements, SchemaItem):
                  requirements = self.requirements.serialize()
              else:
                  requirements = []
      Severity: Minor
      Found in salt/utils/schema.py - About 35 mins to fix

      Function __validate_attributes__ has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
      Open

          def __validate_attributes__(self):
              if not self.items:
                  raise RuntimeError(
                      'The passed items must not be empty'
                  )
      Severity: Minor
      Found in salt/utils/schema.py - About 25 mins to fix

      Similar blocks of code found in 4 locations. Consider refactoring.
      Open

          def __init__(self,
                       multiple_of=None,
                       minimum=None,
                       exclusive_minimum=None,
                       maximum=None,
      Severity: Major
      Found in salt/utils/schema.py and 3 other locations - About 7 hrs to fix
      salt/utils/schema.py on lines 779..803
      salt/utils/schema.py on lines 1094..1140
      salt/utils/schema.py on lines 1192..1244

      Duplicated Code

      Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

      Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

      Tuning

      This issue has a mass of 119.

      We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

      The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

      If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

      See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

      Refactorings

      Further Reading
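
      For the four similar __init__ blocks flagged in this group, one hedged approach is to declare each item's keyword arguments once, at class level, and let a shared base class collect them, so the long per-class signatures are written only once. The class names below (BaseItemSketch, NumberItemSketch, ArrayItemSketch) are hypothetical and not the classes defined in salt/utils/schema.py.

          class BaseItemSketch(object):
              """Collects declared keyword arguments so subclasses avoid duplicated __init__ bodies."""

              # Keywords shared by every item type.
              _common_args = ('title', 'description', 'default', 'enum', 'enumNames')
              # Keywords specific to the subclass; overridden per item type.
              _own_args = ()

              def __init__(self, **kwargs):
                  for name in self._common_args + self._own_args:
                      setattr(self, name, kwargs.pop(name, None))
                  if kwargs:
                      raise TypeError('Unexpected arguments: {0}'.format(', '.join(kwargs)))


          class NumberItemSketch(BaseItemSketch):
              _own_args = ('multiple_of', 'minimum', 'exclusive_minimum',
                           'maximum', 'exclusive_maximum')


          class ArrayItemSketch(BaseItemSketch):
              _own_args = ('items', 'min_items', 'max_items', 'unique_items')


          # Usage: the duplicated keyword plumbing lives in one place.
          num = NumberItemSketch(title='Port', minimum=1, maximum=65535)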

      Similar blocks of code found in 4 locations. Consider refactoring.
      Open

          def __init__(self,
                       items=None,
                       min_items=None,
                       max_items=None,
                       unique_items=None,
      Severity: Major
      Found in salt/utils/schema.py and 3 other locations - About 7 hrs to fix
      salt/utils/schema.py on lines 779..803
      salt/utils/schema.py on lines 1031..1071
      salt/utils/schema.py on lines 1192..1244

      Similar blocks of code found in 4 locations. Consider refactoring.
      Open

          def __init__(self, title=None, description=None, default=None, enum=None, enumNames=None, **kwargs):
              '''
              :param required:
                  If the configuration item is required. Defaults to ``False``.
              :param title:
      Severity: Major
      Found in salt/utils/schema.py and 3 other locations - About 7 hrs to fix
      salt/utils/schema.py on lines 1031..1071
      salt/utils/schema.py on lines 1094..1140
      salt/utils/schema.py on lines 1192..1244

      Similar blocks of code found in 4 locations. Consider refactoring.
      Open

          def __init__(self,
                       properties=None,
                       pattern_properties=None,
                       additional_properties=None,
                       min_properties=None,
      Severity: Major
      Found in salt/utils/schema.py and 3 other locations - About 7 hrs to fix
      salt/utils/schema.py on lines 779..803
      salt/utils/schema.py on lines 1031..1071
      salt/utils/schema.py on lines 1094..1140

      Similar blocks of code found in 2 locations. Consider refactoring.
      Open

                  if not isinstance(self.properties, Schema):
                      for key, prop in self.properties.items():
                          if not isinstance(prop, (Schema, SchemaItem)):
                              raise RuntimeError(
                                  'The passed property who\'s key is \'{0}\' must be of type '
      Severity: Major
      Found in salt/utils/schema.py and 1 other location - About 1 hr to fix
      salt/utils/schema.py on lines 1271..1276

      Duplicated Code

      This issue has a mass of 49.

      Similar blocks of code found in 2 locations. Consider refactoring.
      Open

                  for key, prop in self.pattern_properties.items():
                      if not isinstance(prop, (Schema, SchemaItem)):
                          raise RuntimeError(
                              'The passed pattern_property who\'s key is \'{0}\' must '
                              'be of type Schema, SchemaItem or BaseSchemaItem, '
      Severity: Major
      Found in salt/utils/schema.py and 1 other location - About 1 hr to fix
      salt/utils/schema.py on lines 1257..1263
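
      Both halves of this duplication pair validate a mapping of schema items in the same way, so a single shared helper would remove the copy. The sketch below is generic; SchemaItem here is a placeholder for the real Schema/SchemaItem/BaseSchemaItem classes, and the helper name is hypothetical.

          class SchemaItem(object):
              """Placeholder for the real Schema/SchemaItem/BaseSchemaItem classes."""


          def _validate_item_mapping(mapping, what):
              # Shared check used for both `properties` and `pattern_properties`.
              for key, prop in mapping.items():
                  if not isinstance(prop, SchemaItem):
                      raise RuntimeError(
                          'The passed {0} whose key is {1!r} must be of type '
                          'SchemaItem, not {2}'.format(what, key, type(prop))
                      )


          # Usage inside a validator:
          #   _validate_item_mapping(self.properties, 'property')
          #   _validate_item_mapping(self.pattern_properties, 'pattern_property')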
