ComplianceAsCode/content

View on GitHub

Showing 1,039 of 1,039 total issues

Function load_entities has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

    def load_entities(self, rules_by_id, values_by_id, groups_by_id):
        for rid, val in self.rules.items():
            if not val:
                self.rules[rid] = rules_by_id[rid]

Severity: Minor
Found in ssg/build_yaml.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
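For illustration only (this sketch is not code from ComplianceAsCode/content), the difference those rules describe looks roughly like this: nested flow-breaking structures raise the score, while language shorthand that collapses them does not.

    # Illustrative sketch, not project code: each nested flow break adds to the score.
    def enabled_rules_nested(rules):
        result = []
        for rule in rules:            # +1: break in the linear flow
            if rule is not None:      # +2: flow break, nested one level
                if rule.enabled:      # +3: flow break, nested two levels
                    result.append(rule)
        return result

    # The same behaviour expressed with shorthand is not penalized for it,
    # and the nesting (and the score) drops.
    def enabled_rules_flat(rules):
        return [rule for rule in rules if rule is not None and rule.enabled]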

Function validate_references has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

    def validate_references(self, yaml_file):
        if self.references is None:
            raise ValueError("Empty references section in file %s" % yaml_file)

        for ref_type, ref_val in self.references.items():
Severity: Minor
Found in ssg/build_yaml.py - About 1 hr to fix

Function __lt__ has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

    def __lt__(self, other):
        comparator = Expression.__lt__(self, other)
        if comparator is not NotImplemented:
            return comparator

Severity: Minor
Found in ssg/ext/boolean/boolean.py - About 1 hr to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    if args.group:
        if not os.path.isdir(group_path):
            print("ERROR: The specified group '%s' doesn't exist in the '%s' directory" % (
                args.group, OCP_RULE_DIR))
            return 0
Severity: Major
Found in utils/add_kubernetes_rule.py and 1 other location - About 1 hr to fix
utils/add_kubernetes_rule.py on lines 240..244

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 39.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
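For findings like the pair above, the usual resolution (rather than tuning the threshold) is to extract the repeated block into a helper that both call sites share. A minimal sketch, assuming a hypothetical helper name and that the same variables are in scope at both locations; the actual fix in utils/add_kubernetes_rule.py may look different:

    # Hypothetical extraction of the duplicated check -- not the project's actual refactoring.
    import os

    def group_dir_exists(group, group_path, rule_dir):
        # Print an error and return False when the requested group directory is missing.
        if not os.path.isdir(group_path):
            print("ERROR: The specified group '%s' doesn't exist in the '%s' directory" % (
                group, rule_dir))
            return False
        return True

    # Both call sites then reduce to:
    #     if args.group and not group_dir_exists(args.group, group_path, OCP_RULE_DIR):
    #         return 0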

Function validate_playbook has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

def validate_playbook(playbook, args):
    assert "name" in playbook, "playbook doesn't have a name"
    assert "hosts" in playbook, "playbook doesn't have the hosts entry"
    assert playbook["hosts"] == "@@HOSTS@@", "playbook's hosts is not set to @@HOSTS@@"
    assert "become" in playbook, "playbook doesn't have a become key"
Severity: Minor
Found in tests/assert_ansible_schema.py - About 1 hr to fix

Function get_viable_profiles has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

def get_viable_profiles(selected_profiles, datastream, benchmark, script=None):
    """Read data stream, and return set intersection of profiles of given
    benchmark and those provided in `selected_profiles` parameter.
    """

Severity: Minor
Found in tests/ssg_test_suite/rule.py - About 1 hr to fix

Function set_variables_for_test_scenarios has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

def set_variables_for_test_scenarios(data):
    if data["datatype"] == "int":
        if not data.get("value"):
            # this implies XCCDF variable is used
            data["wrong_value"] = 321
Severity: Minor
Found in shared/templates/sshd_lineinfile/template.py - About 1 hr to fix

Function machine_platform_missing_in_rules has a Cognitive Complexity of 12 (exceeds 7 allowed). Consider refactoring.
Open

def machine_platform_missing_in_rules(ds_path, short_ids_to_check):
    machine_platform_missing = False
    tree = ET.parse(ds_path)
    root = tree.getroot()
    only_rules_query = ".//{%s}Rule" % ssg.constants.XCCDF12_NS
Severity: Minor
Found in tests/test_machine_only_rules.py - About 1 hr to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    if args.group:
        if not os.path.isdir(group_path):
            print("ERROR: The specified group '%s' doesn't exist in the '%s' directory" % (
                args.group, OCP_RULE_DIR))
            return 0
Severity: Major
Found in utils/add_kubernetes_rule.py and 1 other location - About 1 hr to fix
utils/add_kubernetes_rule.py on lines 206..210

This issue has a mass of 39.

Function move_patches_up_to_date_to_source_data_stream_component has 26 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def move_patches_up_to_date_to_source_data_stream_component(datastreamtree):
    ds_checklists = datastreamtree.find(
        ".//{%s}checklists" % datastream_namespace)
    checklists_component_ref = ds_checklists.find(
        "{%s}component-ref" % datastream_namespace)
Severity: Minor
Found in build-scripts/compose_ds.py - About 1 hr to fix

Function __init__ has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, TRUE_class=None, FALSE_class=None, Symbol_class=None, Function_class=None,
Severity: Major
Found in ssg/ext/boolean/boolean.py - About 1 hr to fix

Function reference_check has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def reference_check(env_yaml, rule_dirs, profile_path, product, product_yaml, reference,
Severity: Major
Found in utils/refchecker.py - About 1 hr to fix

Function handle_control has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

def handle_control(product: str, control: ssg.controls.Control, env_yaml: ssg.environment,
Severity: Major
Found in utils/create_srg_export.py - About 1 hr to fix
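The three argument-count findings above are typically addressed by bundling related parameters into a single object, so the signature stays under the limit without losing information. A minimal sketch with a hypothetical container type; the grouping below is illustrative and not the refactoring actually applied to reference_check or handle_control:

    # Illustrative sketch only -- the grouping and class name are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ReferenceCheckContext:
        env_yaml: dict
        product: str
        product_yaml: dict
        reference: str

    def reference_check(context, rule_dirs, profile_path):
        # Contextual values travel together inside `context`, keeping the
        # number of positional parameters within the allowed limit.
        ...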

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        try:
            process_remediation(
                rule, fix_path, lang, output_dirs, expected_file_name, env_yaml, cpe_platforms)
        except Exception as exc:
            msg = (
Severity: Major
Found in build-scripts/collect_remediations.py and 1 other location - About 1 hr to fix
ssg/build_yaml.py on lines 1133..1141

This issue has a mass of 38.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if ansible_fix_present and not ansible_fix_has_machine_conditional:
            sys.stderr.write(
                "Rule %s in %s is missing a machine conditional in Ansible remediation\n" %
                (elem_short_id, ds_path))
            machine_platform_missing = True
Severity: Major
Found in tests/test_machine_only_rules.py and 1 other location - About 1 hr to fix
tests/test_machine_only_rules.py on lines 112..116

This issue has a mass of 38.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            try:
                new_items = make_items_product_specific(
                    dic, product_suffix, allow_overwrites)
            except ValueError as exc:
                msg = (
Severity: Major
Found in ssg/build_yaml.py and 1 other location - About 1 hr to fix
build-scripts/collect_remediations.py on lines 114..121

This issue has a mass of 38.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if bash_fix_present and not bash_fix_has_machine_conditional:
            sys.stderr.write(
                "Rule %s in %s is missing a machine conditional in Bash remediation\n" %
                (elem_short_id, ds_path))
            machine_platform_missing = True
Severity: Major
Found in tests/test_machine_only_rules.py and 1 other location - About 1 hr to fix
tests/test_machine_only_rules.py on lines 107..111

This issue has a mass of 38.

Function get_rule_dir_ovals has a Cognitive Complexity of 11 (exceeds 7 allowed). Consider refactoring.
Open

def get_rule_dir_ovals(dir_path, product=None):
    """
    Gets a list of OVALs contained in a rule directory.

    If product is None, returns all OVALs. Only returns OVALs which exist.
Severity: Minor
Found in ssg/rules.py - About 55 mins to fix

Function parse_from_file_with_jinja has a Cognitive Complexity of 11 (exceeds 7 allowed). Consider refactoring.
Open

    def parse_from_file_with_jinja(self, env_yaml, cpe_platforms):
        self.local_env_yaml.update(env_yaml)
        result = super(BashRemediation, self).parse_from_file_with_jinja(
            self.local_env_yaml, cpe_platforms)

Severity: Minor
Found in ssg/build_remediations.py - About 55 mins to fix

Function add_profiles_from_dir has a Cognitive Complexity of 11 (exceeds 7 allowed). Consider refactoring.
Open

    def add_profiles_from_dir(self, dir_, env_yaml, product_cpes):
        for dir_item in sorted(os.listdir(dir_)):
            dir_item_path = os.path.join(dir_, dir_item)
            if not os.path.isfile(dir_item_path):
                continue
Severity: Minor
Found in ssg/build_yaml.py - About 55 mins to fix
