borgbackup/borg

Showing 611 of 611 total issues

Function get_write_fd has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def get_write_fd(self, no_new=False, want_new=False, raise_full=False):
        if not no_new and (want_new or self.offset and self.offset > self.limit):
            if raise_full:
                raise self.SegmentFull
            self.close_segment()
Severity: Minor
Found in src/borg/legacyrepository.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
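
These rules can be made concrete with a small Python sketch (illustrative only, not taken from the borg codebase):

```python
from collections import namedtuple

# A toy record type for the example.
Task = namedtuple("Task", "name due")

def find_overdue(tasks, today):
    """Nested version: the loop breaks linear flow, the first `if` is
    nested inside it, and the second `if` nests one level deeper still,
    so each adds an increasing increment to the cognitive score."""
    overdue = []
    for task in tasks:
        if task.due is not None:
            if task.due < today:
                overdue.append(task.name)
    return overdue

def find_overdue_flat(tasks, today):
    """Equivalent logic as a comprehension: shorthand the language
    provides for collapsing multiple statements into one, which the
    first rule above does not count as more complex."""
    return [t.name for t in tasks if t.due is not None and t.due < today]
```

Both functions return the same result; the flat version simply avoids the nested flow-breaking structures that the metric penalizes.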

Further reading

Function _parse_braces has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

def _parse_braces(pat):
    """Returns the index values of paired braces in `pat` as a list of tuples.

    The dict's keys are the indexes corresponding to opening braces. Initially,
    they are set to a value of `None`. Once a corresponding closing brace is found,
Severity: Minor
Found in src/borg/helpers/shellpattern.py - About 1 hr to fix

Function add_common_group has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

        def add_common_group(self, parser, suffix, provide_defaults=False):
            """
            Add common options to *parser*.

            *provide_defaults* must only be True exactly once in a parser hierarchy,
Severity: Minor
Found in src/borg/archiver/__init__.py - About 1 hr to fix

Function do_debug_dump_archive has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def do_debug_dump_archive(self, args, repository, manifest):
        """dump decoded archive metadata (not: data)"""
        archive_info = manifest.archives.get_one([args.name])
        repo_objs = manifest.repo_objs
        try:
Severity: Minor
Found in src/borg/archiver/debug_cmd.py - About 1 hr to fix

Function do_check has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def do_check(self, args, repository):
        """Check repository consistency"""
        if args.repair:
            msg = (
                "This is a potentially dangerous function.\n"
Severity: Minor
Found in src/borg/archiver/check_cmd.py - About 1 hr to fix

Function test_race_condition has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def test_race_condition(self, lockpath):
        class SynchronizedCounter:
            def __init__(self, count=0):
                self.lock = ThreadingLock()
                self.count = count
Severity: Minor
Found in src/borg/testsuite/fslocking_test.py - About 1 hr to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        self.assert_equal(
            cf(Chunker(2, 2, CHUNK_MAX_EXP, 2, 3).chunkify(BytesIO(b"foobarboobaz" * 3))),
            [b"foob", b"arboobaz", b"foob", b"arboobaz", b"foob", b"arboobaz"],
Severity: Major
Found in src/borg/testsuite/chunker_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/chunker_test.py on lines 87..89
src/borg/testsuite/chunker_test.py on lines 102..104

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
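
For instance, the three xattr extraction tests reported in this section differ only in the simulated error and the expected message. A table-driven sketch shows one way to restore a single authoritative representation; `fake_extract` is a hypothetical stand-in for the real `cmd(archiver, "extract", ...)` call, which needs the full borg test fixtures, and in a pytest suite this would usually be `@pytest.mark.parametrize`:

```python
def fake_extract(simulated_error):
    """Hypothetical stand-in: pretend `borg extract` ran while setxattr
    failed with the given error, and return the warning output."""
    messages = {"E2BIG": "too big for this filesystem"}
    detail = messages.get(simulated_error, simulated_error)
    return f"{detail}: when setting extended attribute user.attribute"

# The varying parts of the three copied test bodies, collapsed into data.
XATTR_CASES = [
    ("E2BIG", "too big for this filesystem"),
    ("ENOTSUP", "ENOTSUP"),
    ("EACCES", "EACCES"),
]

def check_xattr_warnings():
    """One loop replaces three near-identical blocks; a new error case
    becomes a one-line addition instead of another copy."""
    for simulated, expected_msg in XATTR_CASES:
        out = fake_extract(simulated)
        assert expected_msg in out
        assert "when setting extended attribute user.attribute" in out
    return len(XATTR_CASES)
```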

Tuning

This issue has a mass of 45.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
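
As a sketch only (the schema varies between Code Climate configuration versions, so verify it against the codeclimate-duplication documentation), raising the mass threshold for Python might look like:

```yaml
# Hypothetical .codeclimate.yml fragment; confirm the current schema
# against the codeclimate-duplication documentation before use.
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60
```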

Refactorings

Further Reading

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        with patch.object(xattr, "setxattr", patched_setxattr_ENOTSUP):
            out = cmd(archiver, "extract", "test", exit_code=EXIT_WARNING)
            assert "ENOTSUP" in out
            assert "when setting extended attribute user.attribute" in out
Severity: Major
Found in src/borg/testsuite/archiver/extract_cmd_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/archiver/extract_cmd_test.py on lines 558..561
src/borg/testsuite/archiver/extract_cmd_test.py on lines 570..573

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        with patch.object(xattr, "setxattr", patched_setxattr_EACCES):
            out = cmd(archiver, "extract", "test", exit_code=EXIT_WARNING)
            assert "EACCES" in out
            assert "when setting extended attribute user.attribute" in out
Severity: Major
Found in src/borg/testsuite/archiver/extract_cmd_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/archiver/extract_cmd_test.py on lines 558..561
src/borg/testsuite/archiver/extract_cmd_test.py on lines 564..567

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        self.assert_equal(
            cf(Chunker(0, 1, CHUNK_MAX_EXP, 2, 2).chunkify(BytesIO(b"foobarboobaz" * 3))),
            [b"fooba", b"rboobaz", b"fooba", b"rboobaz", b"fooba", b"rboobaz"],
Severity: Major
Found in src/borg/testsuite/chunker_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/chunker_test.py on lines 102..104
src/borg/testsuite/chunker_test.py on lines 106..108

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        with patch.object(xattr, "setxattr", patched_setxattr_E2BIG):
            out = cmd(archiver, "extract", "test", exit_code=EXIT_WARNING)
            assert "too big for this filesystem" in out
            assert "when setting extended attribute user.attribute" in out
Severity: Major
Found in src/borg/testsuite/archiver/extract_cmd_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/archiver/extract_cmd_test.py on lines 564..567
src/borg/testsuite/archiver/extract_cmd_test.py on lines 570..573

Similar blocks of code found in 3 locations. Consider refactoring.
Open

        self.assert_equal(
            cf(Chunker(1, 2, CHUNK_MAX_EXP, 2, 3).chunkify(BytesIO(b"foobarboobaz" * 3))),
            [b"foobar", b"boobazfo", b"obar", b"boobazfo", b"obar", b"boobaz"],
Severity: Major
Found in src/borg/testsuite/chunker_test.py and 2 other locations - About 1 hr to fix
src/borg/testsuite/chunker_test.py on lines 87..89
src/borg/testsuite/chunker_test.py on lines 106..108

Function do_benchmark_cpu has 39 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def do_benchmark_cpu(self, args):
        """Benchmark CPU bound operations."""
        from timeit import timeit

        random_10M = os.urandom(10 * 1000 * 1000)
Severity: Minor
Found in src/borg/archiver/benchmark_cmd.py - About 1 hr to fix

Function build_parser_create has 39 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def build_parser_create(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog
        from ._common import define_exclusion_group

        create_epilog = process_epilog(
Severity: Minor
Found in src/borg/archiver/create_cmd.py - About 1 hr to fix

Function test_basic_functionality has 38 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def test_basic_functionality(archivers, request):
    archiver = request.getfixturevalue(archivers)
    # Setup files for the first snapshot
    create_regular_file(archiver.input_path, "empty", size=0)
    create_regular_file(archiver.input_path, "file_unchanged", size=128)
Severity: Minor
Found in src/borg/testsuite/archiver/diff_cmd_test.py - About 1 hr to fix

Function test_prune_repository_example has 38 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def test_prune_repository_example(archivers, request):
    archiver = request.getfixturevalue(archivers)
    cmd(archiver, "repo-create", RK_ENCRYPTION)
    # Archives that will be kept, per the example
    # Oldest archive
Severity: Minor
Found in src/borg/testsuite/archiver/prune_cmd_test.py - About 1 hr to fix

Function __init__ has 12 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Major
Found in src/borg/helpers/msgpack.py - About 1 hr to fix

Function __init__ has 12 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Major
Found in src/borg/archive.py - About 1 hr to fix

Function write_options_group has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

    def write_options_group(self, group, fp, with_title=True, base_indent=4):
        def is_positional_group(group):
            return any(not o.option_strings for o in group._group_actions)

        indent = " " * base_indent
Severity: Minor
Found in scripts/make.py - About 1 hr to fix

Function setup_logging has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

def setup_logging(
    stream=None, conf_fname=None, env_var="BORG_LOGGING_CONF", level="info", is_serve=False, log_json=False, func=None
):
    """setup logging module according to the arguments provided
Severity: Minor
Found in src/borg/logger.py - About 1 hr to fix
