ComplianceAsCode/content

Showing 1,039 of 1,039 total issues

Function has_duplicated_subkeys has a Cognitive Complexity of 23 (exceeds 7 allowed). Consider refactoring.
Open

def has_duplicated_subkeys(file_path, file_contents, sections):
    """
    Checks whether a section has duplicated keys in a YAML file.

    Note that these duplicated keys are silently ignored by the YAML parser used.
Severity: Minor
Found in ssg/rule_yaml.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
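
A small, hypothetical Python example (not taken from this repository) makes the nesting rule concrete; the increments in the comments are an informal illustration of the scoring, not an exact Code Climate calculation:

def find_owner_nested(records, name):
    for record in records:                  # +1: break in the linear flow
        if record:                          # +2: condition nested inside the loop
            if record.get("name") == name:  # +3: condition nested two levels deep
                return record.get("owner")
    return None


def find_owner_flat(records, name):
    # Same behaviour, but guard clauses keep the flow linear,
    # so most of the nesting penalties above disappear.
    for record in records:                            # +1
        if not record or record.get("name") != name:  # +2, plus +1 for the boolean operator
            continue
        return record.get("owner")
    return None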

Function determine_ip has a Cognitive Complexity of 23 (exceeds 7 allowed). Consider refactoring.
Open

def determine_ip(domain):
    GUEST_AGENT_XML = ("<channel type='unix'>"
                       "  <source mode='bind'/>"
                       "  <target type='virtio'"
                       "          name='org.qemu.guest_agent.0'"
Severity: Minor
Found in tests/ssg_test_suite/virt.py - About 2 hrs to fix

Cyclomatic complexity is too high in method absorb. (17)
Open

    def absorb(self, args):
        """
        Given an `args` sequence of expressions, return a new list of expression
        applying absorption and negative absorption.

Severity: Minor
Found in ssg/ext/boolean/boolean.py by radon

Cyclomatic Complexity

Cyclomatic Complexity corresponds to the number of decisions a block of code contains plus 1. This number (also called McCabe number) is equal to the number of linearly independent paths through the code. This number can be used as a guide when testing conditional logic in blocks.

Radon analyzes the AST tree of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

Construct          Effect on CC   Reasoning
if                 +1             An if statement is a single decision.
elif               +1             The elif statement adds another decision.
else               +0             The else statement does not cause a new decision. The decision is at the if.
for                +1             There is a decision at the start of the loop.
while              +1             There is a decision at the while statement.
except             +1             Each except branch adds a new conditional path of execution.
finally            +0             The finally block is unconditionally executed.
with               +1             The with statement roughly corresponds to a try/except block (see PEP 343 for details).
assert             +1             The assert statement internally roughly equals a conditional statement.
Comprehension      +1             A list/set/dict comprehension or generator expression is equivalent to a for loop.
Boolean Operator   +1             Every boolean operator (and, or) adds a decision point.

Source: http://radon.readthedocs.org/en/latest/intro.html
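
As an informal illustration (hypothetical code, not from this repository), the decisions in a small function can be tallied directly from the table above:

def classify(values, limit):        # CC starts at 1
    results = []
    for value in values:            # +1 (for)
        if value is None:           # +1 (if)
            continue
        elif value > limit:         # +1 (elif)
            results.append("high")
        else:                       # +0 (else)
            results.append("low")
    return results                  # total: CC = 4

Radon's `cc` command (for example `radon cc -s ssg/`) reports this per-function score, which is where the numbers in these issues come from.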

Cyclomatic complexity is too high in function createPlatformRuleFunc. (17)
Open

@needs_oc
def createPlatformRuleFunc(args):
    url = args.url
    retries = 0
    namespace_flag = ''
Severity: Minor
Found in utils/add_kubernetes_rule.py by radon

File automatus.py has 433 lines of code (exceeds 400 allowed). Consider refactoring.
Open

#!/usr/bin/python3
from __future__ import print_function

import argparse
import contextlib
Severity: Minor
Found in tests/automatus.py - About 2 hrs to fix

Function get_profile_stats has 67 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def get_profile_stats(self, profile):
        """Obtain statistics for the profile"""

        # Holds the intermediary statistics for profile
        profile_stats = {
Severity: Major
Found in ssg/build_profile.py - About 2 hrs to fix

Cyclomatic complexity is too high in function _get_implied_properties. (16)
Open

def _get_implied_properties(existing_properties):
    """
    Generate a dictionary of properties with default values for missing keys.

    This function takes an existing dictionary of properties and adds default values for certain
Severity: Minor
Found in ssg/products.py by radon

Cyclomatic complexity is too high in function sort_section_keys. (16)
Open

def sort_section_keys(file_path, file_contents, sections, sort_func=None):
    """
    Sort subkeys in a YAML file's section.

    Args:
Severity: Minor
Found in ssg/rule_yaml.py by radon

Function ssg_xccdf_stigid_mapping has a Cognitive Complexity of 21 (exceeds 7 allowed). Consider refactoring.
Open

def ssg_xccdf_stigid_mapping(ssgtree):
    xccdf_ns = ssg.xml.determine_xccdf_tree_namespace(ssgtree)
    xccdftostig_idmapping = {}

    for rule in ssgtree.findall(".//{%s}Rule" % xccdf_ns):
Severity: Minor
Found in utils/create-stig-overlay.py - About 2 hrs to fix

Function main has a Cognitive Complexity of 21 (exceeds 7 allowed). Consider refactoring.
Open

def main():
    args = parse_args()

    product_allowlist = set(PRODUCT_ALLOWLIST)
    profile_allowlist = set(PROFILE_ALLOWLIST)
Severity: Minor
Found in utils/ansible_playbook_to_role.py - About 2 hrs to fix

Function main has 64 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def main():
    parser = argparse.ArgumentParser(
        description="Test Jinja macros that Generate OVAL")
    parser.add_argument(
        "--verbose", action="store_true", default=False,
Severity: Major
Found in tests/test_macros_oval.py - About 2 hrs to fix

PlaybookToRoleConverter has 23 functions (exceeds 20 allowed). Consider refactoring.
Open

class PlaybookToRoleConverter():
    PRODUCED_FILES = ('defaults/main.yml', 'meta/main.yml', 'tasks/main.yml', 'vars/main.yml',
                      'README.md')

    def __init__(self, local_playbook_filename):
Severity: Minor
Found in utils/ansible_playbook_to_role.py - About 2 hrs to fix

Similar blocks of code found in 6 locations. Consider refactoring.
Open

                if profile_stats['missing_puppet_fixes']:
                    print("*** rules of '%s' profile missing "
                          "a puppet fix script: %d of %d [%d%% complete]"
                          % (profile, rules_count - impl_puppet_fixes_count,
                             rules_count,
Severity: Major
Found in ssg/build_profile.py and 5 other locations - About 2 hrs to fix
ssg/build_profile.py on lines 570..577
ssg/build_profile.py on lines 579..586
ssg/build_profile.py on lines 588..595
ssg/build_profile.py on lines 597..604
ssg/build_profile.py on lines 615..622

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
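
The six flagged blocks in ssg/build_profile.py differ only in which fix type they report on, so one common refactoring is to drive the message from a single loop. Below is a hedged sketch that reuses the names visible in the excerpts (profile_stats, rules_count) but is otherwise hypothetical; it assumes each missing_*_fixes entry is a collection of rule ids, which the excerpts do not show:

# Fix types taken from the excerpts on this page; the real tuple would list
# whichever fix types the project reports on.
FIX_TYPES = ("bash", "ansible", "ignition", "puppet", "anaconda")


def report_missing_fixes(profile, profile_stats, rules_count):
    # One loop replaces the six near-identical print blocks.
    # Assumes rules_count > 0, as in the original code path.
    for fix_type in FIX_TYPES:
        missing = profile_stats.get("missing_%s_fixes" % fix_type)
        if not missing:
            continue
        implemented = rules_count - len(missing)
        print("*** rules of '%s' profile missing "
              "a %s fix script: %d of %d [%d%% complete]"
              % (profile, fix_type, len(missing), rules_count,
                 100 * implemented // rules_count))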

Tuning

This issue has a mass of 56.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading

Similar blocks of code found in 6 locations. Consider refactoring.
Open

                if profile_stats['missing_anaconda_fixes']:
                    print("*** rules of '%s' profile missing "
                          "a anaconda fix script: %d of %d [%d%% complete]"
                          % (profile, rules_count - impl_anaconda_fixes_count,
                             rules_count,
Severity: Major
Found in ssg/build_profile.py and 5 other locations - About 2 hrs to fix
ssg/build_profile.py on lines 570..577
ssg/build_profile.py on lines 579..586
ssg/build_profile.py on lines 588..595
ssg/build_profile.py on lines 597..604
ssg/build_profile.py on lines 606..613

Similar blocks of code found in 6 locations. Consider refactoring.
Open

                if profile_stats['missing_ansible_fixes']:
                    print("*** rules of '%s' profile missing "
                          "a ansible fix script: %d of %d [%d%% complete]"
                          % (profile, rules_count - impl_ansible_fixes_count,
                             rules_count,
Severity: Major
Found in ssg/build_profile.py and 5 other locations - About 2 hrs to fix
ssg/build_profile.py on lines 570..577
ssg/build_profile.py on lines 588..595
ssg/build_profile.py on lines 597..604
ssg/build_profile.py on lines 606..613
ssg/build_profile.py on lines 615..622

Similar blocks of code found in 6 locations. Consider refactoring.
Open

                if profile_stats['missing_bash_fixes']:
                    print("*** rules of '%s' profile missing "
                          "a bash fix script: %d of %d [%d%% complete]"
                          % (profile, rules_count - impl_bash_fixes_count,
                             rules_count,
Severity: Major
Found in ssg/build_profile.py and 5 other locations - About 2 hrs to fix
ssg/build_profile.py on lines 579..586
ssg/build_profile.py on lines 588..595
ssg/build_profile.py on lines 597..604
ssg/build_profile.py on lines 606..613
ssg/build_profile.py on lines 615..622

GenericRunner has 23 functions (exceeds 20 allowed). Consider refactoring.
Open

class GenericRunner(object):
    def __init__(self, environment, profile, datastream, benchmark_id):
        self.environment = environment
        self.profile = profile
        self.datastream = datastream
Severity: Minor
Found in tests/ssg_test_suite/oscap.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def get_level(self, level_id):
        """
        Retrieve a level by its ID.

        Args:
Severity: Major
Found in ssg/controls.py and 1 other location - About 2 hrs to fix
ssg/controls.py on lines 665..685

Similar blocks of code found in 6 locations. Consider refactoring.
Open

                if profile_stats['missing_ignition_fixes']:
                    print("*** rules of '%s' profile missing "
                          "a ignition fix script: %d of %d [%d%% complete]"
                          % (profile, rules_count - impl_ignition_fixes_count,
                             rules_count,
Severity: Major
Found in ssg/build_profile.py and 5 other locations - About 2 hrs to fix
ssg/build_profile.py on lines 570..577
ssg/build_profile.py on lines 579..586
ssg/build_profile.py on lines 597..604
ssg/build_profile.py on lines 606..613
ssg/build_profile.py on lines 615..622

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def get_control(self, control_id):
        """
        Retrieve a control by its ID.

        Args:
Severity: Major
Found in ssg/controls.py and 1 other location - About 2 hrs to fix
ssg/controls.py on lines 687..707
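
The two near-identical lookups flagged in ssg/controls.py (get_level above and get_control here) appear to differ only in which mapping they search, so they could delegate to one shared helper. The sketch below is hypothetical: the class shape, attribute names, and error handling are assumptions for illustration, not the project's actual implementation.

class ControlsLookupMixin(object):
    """Hypothetical sketch; attribute names and error handling are assumptions,
    not the actual ssg/controls.py code."""

    def __init__(self, levels_by_id=None, controls_by_id=None):
        self.levels_by_id = levels_by_id or {}
        self.controls_by_id = controls_by_id or {}

    def _get_item_by_id(self, items, item_id, kind):
        # Single lookup path shared by both public methods.
        try:
            return items[item_id]
        except KeyError:
            raise ValueError("%s '%s' not found" % (kind, item_id))

    def get_level(self, level_id):
        return self._get_item_by_id(self.levels_by_id, level_id, "Level")

    def get_control(self, control_id):
        return self._get_item_by_id(self.controls_by_id, control_id, "Control")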
