borgbackup/borg

Showing 611 of 611 total issues

Function main has a Cognitive Complexity of 31 (exceeds 5 allowed). Consider refactoring.
Open

def main():  # pragma: no cover
    # Make sure stdout and stderr have errors='replace' to avoid unicode
    # issues when print()-ing unicode file names
    sys.stdout = ErrorIgnoringTextIOWrapper(sys.stdout.buffer, sys.stdout.encoding, "replace", line_buffering=True)
    sys.stderr = ErrorIgnoringTextIOWrapper(sys.stderr.buffer, sys.stderr.encoding, "replace", line_buffering=True)
Severity: Minor
Found in src/borg/archiver/__init__.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

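To make these rules concrete, here is a contrived sketch (not borg code) of how nesting drives the score up while statement-collapsing shorthand does not:

    # Contrived illustration of the rules above: +1 per break in linear
    # flow, plus an extra +1 for each level of nesting it sits under.

    def flat(items):
        positives = [i for i in items if i > 0]  # shorthand: no extra cost
        for i in positives:                      # +1 (break in linear flow)
            print(i)

    def nested(items):
        for i in items:        # +1 (break in linear flow)
            if i is not None:  # +2 (break, nested one level deep)
                if i > 0:      # +3 (break, nested two levels deep)
                    print(i)

    # Under these rules, flat() scores 1 while nested() scores 6,
    # even though both do essentially the same work.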

Identical blocks of code found in 2 locations. Consider refactoring.
Open

@skipif_acls_not_working
def test_default_acl():
    tmpdir = tempfile.mkdtemp()
    assert get_acl(tmpdir) == {}
    set_acl(tmpdir, access=ACCESS_ACL, default=DEFAULT_ACL)
Severity: Major
Found in src/borg/testsuite/platform_linux_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/platform_freebsd_test.py on lines 87..93

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
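
For the pair reported here, the conventional remedy is to move the shared body into a helper that both platform test modules call with their own ACL primitives. A minimal sketch, using only the lines visible in the excerpt (the helper's name and placement are hypothetical):

    # Hypothetical shared helper; name and module placement are illustrative.
    import tempfile

    def check_default_acl(get_acl, set_acl, access_acl, default_acl):
        """Body shared by the Linux and FreeBSD test_default_acl tests."""
        tmpdir = tempfile.mkdtemp()
        assert get_acl(tmpdir) == {}
        set_acl(tmpdir, access=access_acl, default=default_acl)
        # ... the remaining assertions from the original tests follow here ...

Each platform module then keeps only a thin, decorated test that passes in its own get_acl/set_acl functions and ACL constants.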

Tuning

This issue has a mass of 80.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
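
As a sketch of where that tuning lives (verify the exact keys against the codeclimate-duplication documentation, as the schema has changed across versions):

    # .codeclimate.yml -- duplication tuning sketch; keys assumed from the docs
    version: "2"
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 80  # raise to report less, lower to catch more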

Identical blocks of code found in 2 locations. Consider refactoring.
Open

@skipif_acls_not_working
def test_default_acl():
    tmpdir = tempfile.mkdtemp()
    assert get_acl(tmpdir) == {}
    set_acl(tmpdir, access=ACCESS_ACL, default=DEFAULT_ACL)
Severity: Major
Found in src/borg/testsuite/platform_freebsd_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/platform_linux_test.py on lines 73..79

Function do_repo_delete has a Cognitive Complexity of 30 (exceeds 5 allowed). Consider refactoring.
Open

    def do_repo_delete(self, args, repository):
        """Delete a repository"""
        self.output_list = args.output_list
        dry_run = args.dry_run
        keep_security_info = args.keep_security_info
Severity: Minor
Found in src/borg/archiver/repo_delete_cmd.py - About 4 hrs to fix

RemoteRepository has 34 functions (exceeds 20 allowed). Consider refactoring.
Open

class RemoteRepository:
    extra_test_args = []  # type: ignore

    class RPCError(Exception):
        def __init__(self, unpacked):
Severity: Minor
Found in src/borg/remote.py - About 4 hrs to fix

Function acquire has a Cognitive Complexity of 29 (exceeds 5 allowed). Consider refactoring.
Open

    def acquire(self):
        # goal
        # for exclusive lock: there must be only 1 exclusive lock and no other (exclusive or non-exclusive) locks.
        # for non-exclusive lock: there can be multiple n-e locks, but there must not exist an exclusive lock.
        logger.debug(f"LOCK-ACQUIRE: trying to acquire a lock. exclusive: {self.is_exclusive}.")
Severity: Minor
Found in src/borg/storelocking.py - About 4 hrs to fix
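
The comment at the top of acquire states the locking invariant plainly; a hypothetical predicate (illustration only, not borg's implementation) makes it concrete:

    # Sketch of the stated invariant for a shared/exclusive lock.
    def may_acquire(exclusive_held: int, shared_held: int, want_exclusive: bool) -> bool:
        if want_exclusive:
            # an exclusive lock tolerates no other lock of either kind
            return exclusive_held == 0 and shared_held == 0
        # a non-exclusive lock only requires that no exclusive lock exists
        return exclusive_held == 0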

Function show_progress has a Cognitive Complexity of 29 (exceeds 5 allowed). Consider refactoring.
Open

    def show_progress(self, item=None, final=False, stream=None, dt=None):
        now = time.monotonic()
        if dt is None or now - self.last_progress > dt:
            self.last_progress = now
            if self.output_json:
Severity: Minor
Found in src/borg/archive.py - About 4 hrs to fix
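
The visible lines show a throttling idiom: output is emitted only when dt is unset or at least dt seconds have elapsed since the last emission. The same idiom in a self-contained sketch (names are illustrative, not borg's):

    import time

    class ThrottledProgress:
        """Emit at most one progress line per `dt` seconds."""

        def __init__(self):
            self.last_progress = 0.0

        def show(self, message, dt=None):
            now = time.monotonic()
            if dt is None or now - self.last_progress > dt:
                self.last_progress = now
                print(message)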

Function analyze_archives has a Cognitive Complexity of 29 (exceeds 5 allowed). Consider refactoring.
Open

    def analyze_archives(self) -> Tuple[Set, Set, int, int, int]:
        """Iterate over all items in all archives, create the dicts id -> size of all used/wanted chunks."""

        def use_it(id, *, wanted=False):
            entry = self.chunks.get(id)
Severity: Minor
Found in src/borg/archiver/compact_cmd.py - About 4 hrs to fix
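
The docstring describes a mark phase: walk every item in every archive and record id -> size for each referenced chunk. A hypothetical sketch of that shape (the data model is assumed, not borg's actual types):

    # Assumed model: each item carries (chunk_id, size) pairs.
    def analyze(archives, chunk_index):
        used = {}
        for archive in archives:
            for item in archive:
                for chunk_id, size in item.chunks:
                    used[chunk_id] = size
        # chunks referenced by archives but absent from the index are suspect
        missing = {cid for cid in used if cid not in chunk_index}
        return used, missing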

Function _read has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

    def _read(self, fd, header, segment, offset, acceptable_tags, read_data=True):
        """
        Code shared by read() and iter_objects().

        Confidence in returned data:
Severity: Minor
Found in src/borg/legacyrepository.py - About 4 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_change_location_to_b2repokey(archivers, request):
    archiver = request.getfixturevalue(archivers)
    cmd(archiver, "repo-create", "--encryption=keyfile-blake2-aes-ocb")
    log = cmd(archiver, "repo-info")
    assert "(key file BLAKE2b" in log
Severity: Major
Found in src/borg/testsuite/archiver/key_cmds_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/archiver/key_cmds_test.py on lines 39..46

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_change_location_to_repokey(archivers, request):
    archiver = request.getfixturevalue(archivers)
    cmd(archiver, "repo-create", KF_ENCRYPTION)
    log = cmd(archiver, "repo-info")
    assert "(key file" in log
Severity: Major
Found in src/borg/testsuite/archiver/key_cmds_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/archiver/key_cmds_test.py on lines 29..36

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_change_location_to_keyfile(archivers, request):
    archiver = request.getfixturevalue(archivers)
    cmd(archiver, "repo-create", RK_ENCRYPTION)
    log = cmd(archiver, "repo-info")
    assert "(repokey" in log
Severity: Major
Found in src/borg/testsuite/archiver/key_cmds_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/archiver/key_cmds_test.py on lines 49..56

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_change_location_to_b2keyfile(archivers, request):
    archiver = request.getfixturevalue(archivers)
    cmd(archiver, "repo-create", "--encryption=repokey-blake2-aes-ocb")
    log = cmd(archiver, "repo-info")
    assert "(repokey BLAKE2b" in log
Severity: Major
Found in src/borg/testsuite/archiver/key_cmds_test.py and 1 other location - About 4 hrs to fix
src/borg/testsuite/archiver/key_cmds_test.py on lines 59..66
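
The four findings above share one shape: create a repository with a given encryption mode, run repo-info, and assert the expected key-type marker appears. The usual pytest remedy is parametrization. A sketch, assuming the cmd helper, the archivers fixture, and the KF_ENCRYPTION/RK_ENCRYPTION constants from borg's test module, with only the pairs visible in the excerpts:

    import pytest

    # Hypothetical merge of the four near-identical tests.
    @pytest.mark.parametrize(
        "encryption, expected",
        [
            ("--encryption=keyfile-blake2-aes-ocb", "(key file BLAKE2b"),
            (KF_ENCRYPTION, "(key file"),
            (RK_ENCRYPTION, "(repokey"),
            ("--encryption=repokey-blake2-aes-ocb", "(repokey BLAKE2b"),
        ],
    )
    def test_change_location(archivers, request, encryption, expected):
        archiver = request.getfixturevalue(archivers)
        cmd(archiver, "repo-create", encryption)
        log = cmd(archiver, "repo-info")
        assert expected in log
        # ... the key change-location steps from the original tests follow ...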

Function do_prune has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

    def do_prune(self, args, repository, manifest):
        """Prune repository archives according to specified rules"""
        if not any(
            (
                args.secondly,
Severity: Minor
Found in src/borg/archiver/prune_cmd.py - About 3 hrs to fix
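
The excerpt shows the guard requiring at least one retention rule (args.secondly and friends). The rules themselves follow the familiar "keep the newest archive per time bucket" pattern; a hypothetical sketch of one such rule, not borg's actual implementation:

    # Keep the newest archive in each of the n most recent time buckets.
    # `archives` is assumed to be a list of objects with a `ts` datetime.
    def keep_per_bucket(archives, bucket_fmt="%Y-%m-%d", n=7):
        kept, seen = [], set()
        for archive in sorted(archives, key=lambda a: a.ts, reverse=True):
            bucket = archive.ts.strftime(bucket_fmt)
            if bucket not in seen:  # newest archive in each bucket wins
                seen.add(bucket)
                kept.append(archive)
            if len(seen) == n:
                break
        return kept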

File prune_cmd.py has 329 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import argparse
from collections import OrderedDict
from datetime import datetime, timezone, timedelta
import logging
from operator import attrgetter
Severity: Minor
Found in src/borg/archiver/prune_cmd.py - About 3 hrs to fix

File process.py has 328 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import contextlib
import os
import os.path
import shlex
import signal
Severity: Minor
Found in src/borg/helpers/process.py - About 3 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_spoof_manifest(key):
    repo_objs = RepoObj(key)
    data = b"fake or malicious manifest data"  # file content could be provided by attacker.
    id = repo_objs.id_hash(data)
    # create a repo object containing user data (file content data).
Severity: Major
Found in src/borg/testsuite/repoobj_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/repoobj_test.py on lines 121..130

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def test_spoof_archive(key):
    repo_objs = RepoObj(key)
    data = b"fake or malicious archive data"  # file content could be provided by attacker.
    id = repo_objs.id_hash(data)
    # create a repo object containing user data (file content data).
Severity: Major
Found in src/borg/testsuite/repoobj_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/repoobj_test.py on lines 109..118
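
Only the payload bytes differ in the visible lines of this pair, so the same parametrization remedy applies. A sketch, assuming the key fixture and RepoObj from borg's test module:

    import pytest

    # Hypothetical merge of test_spoof_manifest / test_spoof_archive.
    @pytest.mark.parametrize(
        "data",
        [b"fake or malicious manifest data", b"fake or malicious archive data"],
    )
    def test_spoof_user_data(key, data):
        repo_objs = RepoObj(key)
        id = repo_objs.id_hash(data)
        # create a repo object containing user data (file content data),
        # then replay the original tests' remaining steps.
        # ... assertions elided in the excerpts above ...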

File check_cmd_test.py has 325 lines of code (exceeds 250 allowed). Consider refactoring.
Open

from datetime import datetime, timezone, timedelta
import shutil
from unittest.mock import patch

import pytest
Severity: Minor
Found in src/borg/testsuite/archiver/check_cmd_test.py - About 3 hrs to fix

Function create_helper has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.
Open

    def create_helper(self, tarinfo, status=None, type=None):
        ph = tarinfo.pax_headers
        if ph and "BORG.item.version" in ph:
            assert ph["BORG.item.version"] == "1"
            meta_bin = base64.b64decode(ph["BORG.item.meta"])
Severity: Minor
Found in src/borg/archive.py - About 3 hrs to fix
