avocado-framework/avocado

Showing 895 of 913 total issues

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    def test_tag_double_with_empty(self):
        raw = ":avocado: tags=fast,,network"
        exp = {"fast": None, "network": None}
        self.assertEqual(get_docstring_directives_tags(raw), exp)
Severity: Major
Found in selftests/unit/safeloader_docstring.py and 4 other locations - About 50 mins to fix
selftests/unit/safeloader_docstring.py on lines 107..110
selftests/unit/safeloader_docstring.py on lines 117..120
selftests/unit/safeloader_docstring.py on lines 122..125
selftests/unit/safeloader_docstring.py on lines 142..145
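
One way to satisfy DRY here is to fold this test and the four sibling tests it is flagged with into a single table-driven test, since they differ only in the raw docstring and the expected mapping. The sketch below is illustrative, not the project's code: the import path for get_docstring_directives_tags is assumed from the selftest module's name, while the (raw, expected) pairs are copied verbatim from the flagged blocks.

    import unittest

    # Import path assumed from the selftest module's name; adjust to wherever
    # get_docstring_directives_tags actually lives in the tree.
    from avocado.core.safeloader.docstring import get_docstring_directives_tags


    class DocstringDirectivesTags(unittest.TestCase):
        def test_tags(self):
            # (raw docstring, expected tags) -- one pair per former test method
            cases = [
                (":avocado: tags=fast,network", {"fast": None, "network": None}),
                (":avocado: tags=fast,,network", {"fast": None, "network": None}),
                (":avocado: tags=SLOW,disk,disk", {"SLOW": None, "disk": None}),
                (":avocado: tags=slow,DISK", {"slow": None, "DISK": None}),
                (":avocado: tags=fast,slow\n:avocado: enable",
                 {"fast": None, "slow": None}),
            ]
            for raw, exp in cases:
                with self.subTest(raw=raw):
                    self.assertEqual(get_docstring_directives_tags(raw), exp)

The trade-off is that the individual test names disappear from the test listing; subTest still reports which raw string failed, so diagnosability is largely preserved.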

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge, leaving bugs behind as the two similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 36.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
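
For example, since this finding has a mass of 36, raising the Python threshold above that value would stop blocks of this size from being reported. The fragment below is only a sketch of what that might look like; the exact key layout depends on the Code Climate version in use, so confirm it against codeclimate-duplication's documentation before relying on it.

    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              # Blocks with a mass below this value are not analyzed for
              # duplication; 40 would silence the mass-36 findings above.
              mass_threshold: 40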

Refactorings

Further Reading

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    def test_tag_duplicate(self):
        raw = ":avocado: tags=SLOW,disk,disk"
        exp = {"SLOW": None, "disk": None}
        self.assertEqual(get_docstring_directives_tags(raw), exp)
Severity: Major
Found in selftests/unit/safeloader_docstring.py and 4 other locations - About 50 mins to fix
selftests/unit/safeloader_docstring.py on lines 107..110
selftests/unit/safeloader_docstring.py on lines 112..115
selftests/unit/safeloader_docstring.py on lines 117..120
selftests/unit/safeloader_docstring.py on lines 142..145

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    def test_tag_lowercase_uppercase(self):
        raw = ":avocado: tags=slow,DISK"
        exp = {"slow": None, "DISK": None}
        self.assertEqual(get_docstring_directives_tags(raw), exp)
Severity: Major
Found in selftests/unit/safeloader_docstring.py and 4 other locations - About 50 mins to fix
selftests/unit/safeloader_docstring.py on lines 107..110
selftests/unit/safeloader_docstring.py on lines 112..115
selftests/unit/safeloader_docstring.py on lines 122..125
selftests/unit/safeloader_docstring.py on lines 142..145

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                            constraint_name = next(
                                (
                                    i
                                    for i, p in enumerate(parameters)
                                    if p[0] == constraint_data[1]
Severity: Minor
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Parser.py and 1 other location - About 50 mins to fix
optional_plugins/varianter_cit/avocado_varianter_cit/Parser.py on lines 60..62
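
Both flagged fragments are the same "find the index of the parameter with a given name" lookup, differing only in which element of constraint_data they compare against, so a small shared helper would remove the duplication. This is a sketch only: the helper name is invented, the name-at-p[0] layout is inferred from the snippets, and the None default is an assumption since the original next() call's default argument is not visible here.

    def find_parameter_index(parameters, name):
        """Return the index of the parameter whose name equals `name`, or None."""
        return next((i for i, p in enumerate(parameters) if p[0] == name), None)

    # Both call sites then collapse to something like:
    #   constraint_name = find_parameter_index(parameters, constraint_data[1])
    #   ...             = find_parameter_index(parameters, constraint_data[0])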

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    def test_tag_double(self):
        raw = ":avocado: tags=fast,network"
        exp = {"fast": None, "network": None}
        self.assertEqual(get_docstring_directives_tags(raw), exp)
Severity: Major
Found in selftests/unit/safeloader_docstring.py and 4 other locations - About 50 mins to fix
selftests/unit/safeloader_docstring.py on lines 112..115
selftests/unit/safeloader_docstring.py on lines 117..120
selftests/unit/safeloader_docstring.py on lines 122..125
selftests/unit/safeloader_docstring.py on lines 142..145

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                            i
                            for i, p in enumerate(parameters)
                            if p[0] == constraint_data[0]
Severity: Minor
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Parser.py and 1 other location - About 50 mins to fix
optional_plugins/varianter_cit/avocado_varianter_cit/Parser.py on lines 69..73

Similar blocks of code found in 5 locations. Consider refactoring.
Open

    def test_tag_newline_after(self):
        raw = ":avocado: tags=fast,slow\n:avocado: enable"
        exp = {"fast": None, "slow": None}
        self.assertEqual(get_docstring_directives_tags(raw), exp)
Severity: Major
Found in selftests/unit/safeloader_docstring.py and 4 other locations - About 50 mins to fix
selftests/unit/safeloader_docstring.py on lines 107..110
selftests/unit/safeloader_docstring.py on lines 112..115
selftests/unit/safeloader_docstring.py on lines 117..120
selftests/unit/safeloader_docstring.py on lines 122..125

Function run has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def run(self, runnable):
        # pylint: disable=W0201
        self.runnable = runnable
        yield self.prepare_status("started")

Severity: Minor
Found in avocado/plugins/runners/asset.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
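
To illustrate those rules with code that is not from this repository: the nested version below accrues an increment for every break in linear flow plus a penalty for each level of nesting, while the flattened version leans on language shorthand and scores far lower. The exact totals depend on the implementation of the metric, but the shape of the penalty is the point.

    def first_negative_nested(rows):
        for row in rows:            # +1
            if row:                 # +2 (nested one level)
                for value in row:   # +3 (nested two levels)
                    if value < 0:   # +4 (nested three levels)
                        return value
        return None

    def first_negative_flat(rows):
        # A generator expression is the kind of shorthand the first rule
        # exempts, so the same behavior reads (and scores) much more simply.
        return next((v for row in rows for v in row if v < 0), None)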

Further reading

Avoid deeply nested control flow statements.
Open

                        if len(c) < len(constraint_array):
                            if set(c) < set(constraint_array):
                                has_subset = True
                                break
                        if len(c) > len(constraint_array):
Severity: Major
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Solver.py - About 45 mins to fix
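
Based only on the fragment shown, the innermost nesting level could likely be removed by folding the length check and the subset check into one condition; the enclosing loop below is assumed (implied by the break), not copied from Solver.py.

    for c in constraints:  # hypothetical enclosing loop implied by `break`
        if len(c) < len(constraint_array) and set(c) < set(constraint_array):
            has_subset = True
            break
        if len(c) > len(constraint_array):
            ...  # the original "superset" handling would continue here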

Function test_kill_process_tree_dont_timeout_3s has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def test_kill_process_tree_dont_timeout_3s(
Severity: Minor
Found in selftests/unit/utils/process.py - About 45 mins to fix
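
A common remedy for signatures that grow past four parameters is to bundle related values into a parameter object; everything below is hypothetical, since only the first line of the flagged test is visible here.

    from dataclasses import dataclass


    @dataclass
    class KillTreeCase:
        """Groups what would otherwise be several positional arguments."""
        pid: int
        signal: int
        timeout: float
        kill_children: bool = True
        expect_timeout: bool = False


    def run_kill_tree_case(case: KillTreeCase):
        # Two short signatures (the dataclass and this function) replace one
        # six-argument signature, and call sites name every field explicitly.
        ...

If, as is common in unittest code, the extra arguments are mocks injected by stacked mock.patch decorators, an alternative is to start the patchers in setUp with patcher.start() and self.addCleanup(patcher.stop), which keeps the test method's signature short.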

Avoid deeply nested control flow statements.
Open

                        if value == constraint[counter][self.CON_NAME]:
                            value_array.append([constraint[counter][self.CON_VAL]])
                            if (counter + 1) != len(constraint):
                                counter += 1
                        else:
Severity: Major
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Solver.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                            if regexp.match(key):
                                remove.append(key)
                        for key in remove:

Function bootstrap has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

    DataTable.ext.renderer.pageButton.bootstrap = function ( settings, host, idx, buttons, page, pages ) {

Avoid deeply nested control flow statements.
Open

                            for pair in range(len(constraint[c])):
                                constraint_array.add(constraint[c][pair])
                        constraint_array = sorted(
Severity: Major
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Solver.py - About 45 mins to fix

Function change_one_value has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def change_one_value(self, matrix, row_index=None, column_index=None):
        """
        Change one cell inside the matrix

        :param matrix: matrix to be changed
Severity: Minor
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Cit.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

                        if set(copy[i]).issubset(set(copy[j])):
                            items_to_remove.add(copy[j])
        for item in items_to_remove:
Severity: Major
Found in optional_plugins/varianter_cit/avocado_varianter_cit/Solver.py - About 45 mins to fix

Function load_from_tree has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def load_from_tree(name, version, release, arch, package_type, path):
Severity: Minor
Found in avocado/plugins/distro.py - About 45 mins to fix

Function _run_unittest has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def _run_unittest(cls, module_path, module_class_method, queue):
        sys.path.insert(0, module_path)
        stream = io.StringIO()

        try:
Severity: Minor
Found in avocado/plugins/runners/python_unittest.py - About 45 mins to fix

Function test_all_commands has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def test_all_commands(self):
        # Test all commands except "set_target" which is tested elsewhere
        for cmd, _ in (
            (c, r)
            for (c, r) in self.service_command_generator.commands
Severity: Minor
Found in selftests/unit/utils/service.py - About 45 mins to fix

Function test_kill_process_tree_timeout_3s has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def test_kill_process_tree_timeout_3s(
Severity: Minor
Found in selftests/unit/utils/process.py - About 45 mins to fix