borgbackup/borg

Showing 611 of 611 total issues

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    while i < end_p2:
        grow_factor = get_grow_factor(i)
        p = find_bigger_prime(gen, i)
        sizes.append(p)
        i = int(i * grow_factor)
Severity: Major
Found in scripts/hash_sizes.py and 1 other location - About 2 hrs to fix
scripts/hash_sizes.py on lines 82..86

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge, leaving bugs behind as the two similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 53.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too eagerly, try raising the threshold. If you suspect that it isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
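For example, a tuning entry in .codeclimate.yml can look like the following sketch. The keys follow codeclimate-duplication's two-check config format, but verify against its documentation before relying on them, and treat the value 60 as purely illustrative:

    checks:
      similar-code:
        config:
          threshold: 60  # only report blocks with mass >= 60
      identical-code:
        config:
          threshold: 60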

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    while i < end_p1:
        grow_factor = get_grow_factor(i)
        p = find_bigger_prime(gen, i)
        sizes.append(p)
        i = int(i * grow_factor)
Severity: Major
Found in scripts/hash_sizes.py and 1 other location - About 2 hrs to fix
scripts/hash_sizes.py on lines 89..93

This issue has a mass of 53.
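These two loops differ only in their end bound, so both findings can be resolved at once by extracting the loop into a helper. A minimal sketch, assuming scripts/hash_sizes.py defines get_grow_factor, find_bigger_prime, gen, sizes, and the end_p1/end_p2 bounds as the excerpts suggest; the helper name grow_sizes is invented here:

    def grow_sizes(sizes, gen, i, end):
        # Shared body of both duplicated phases; returns the final i so the
        # caller can continue from where this phase stopped.
        while i < end:
            grow_factor = get_grow_factor(i)
            p = find_bigger_prime(gen, i)
            sizes.append(p)
            i = int(i * grow_factor)
        return i

    # The two duplicated blocks then collapse to:
    i = grow_sizes(sizes, gen, i, end_p1)
    i = grow_sizes(sizes, gen, i, end_p2)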

File key_test.py has 259 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import tempfile
from binascii import a2b_base64
from unittest.mock import MagicMock

import pytest
Severity: Minor
Found in src/borg/testsuite/key_test.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    with open(os.path.join(archiver.input_path, "large_file"), "wb") as fd:
        fd.write(b"a" * 280)
        fd.write(b"b" * 280)
Severity: Major
Found in src/borg/testsuite/archiver/recreate_cmd_test.py and 1 other location - About 2 hrs to fix
src/borg/testsuite/archiver/list_cmd_test.py on lines 38..40

This issue has a mass of 52.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    with open(os.path.join(archiver.input_path, "two_chunks"), "wb") as fd:
        fd.write(b"abba" * 2000000)
        fd.write(b"baab" * 2000000)
Severity: Major
Found in src/borg/testsuite/archiver/list_cmd_test.py and 1 other location - About 2 hrs to fix
src/borg/testsuite/archiver/recreate_cmd_test.py on lines 137..139

This issue has a mass of 52.
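Both findings wrap the same pattern: write a file from two distinct byte runs so it splits into separate chunks. A shared helper in a common test module would remove the duplication; a minimal sketch in which the helper name and placement are hypothetical:

    def write_two_part_file(path, part_a, part_b):
        # Two distinct byte runs, so the chunker produces separate chunks.
        with open(path, "wb") as fd:
            fd.write(part_a)
            fd.write(part_b)

    # list_cmd_test.py:
    write_two_part_file(os.path.join(archiver.input_path, "two_chunks"), b"abba" * 2000000, b"baab" * 2000000)
    # recreate_cmd_test.py:
    write_two_part_file(os.path.join(archiver.input_path, "large_file"), b"a" * 280, b"b" * 280)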

Function yes has 17 arguments (exceeds 4 allowed). Consider refactoring.
Open

def yes(
Severity: Major
Found in src/borg/helpers/yes_no.py - About 2 hrs to fix
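A common fix for an argument list this long is to group related parameters into a configuration object and pass that instead. A hypothetical sketch of the shape such a refactoring could take, not borg's actual API; all field names here are invented:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PromptConfig:
        # Invented fields; the real yes() takes 17 parameters covering
        # messages, accepted answers, defaults, retries, and I/O streams.
        msg: str = ""
        default: bool = False
        retry: bool = True
        env_var_override: Optional[str] = None

    def yes(config: PromptConfig) -> bool:
        """Sketch only: the real implementation would consume config fields."""
        ...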

File transfer_cmd.py has 255 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import argparse

from ._common import with_repository, with_other_repository, Highlander
from ..archive import Archive
from ..compress import CompressionSpec
Severity: Minor
Found in src/borg/archiver/transfer_cmd.py - About 2 hrs to fix

Function os_open has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def os_open(*, flags, path=None, parent_fd=None, name=None, noatime=False):
    """
    Use os.open to open a fs item.

    If parent_fd and name are given, they are preferred and openat will be used,
Severity: Minor
Found in src/borg/helpers/fs.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
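For intuition, here is a small, hypothetical function annotated with the increments such a metric typically assigns; the exact scores depend on the tool's implementation:

    def first_match(groups, predicate):
        for group in groups:           # +1: break in linear flow
            for item in group:         # +2: flow break, nested one level
                if predicate(item):    # +3: flow break, nested two levels
                    return item
        return None                    # total cognitive complexity: 6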

Function _translate_alternatives has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def _translate_alternatives(pat):
    """Translates the shell-style alternative portions of the pattern to regular expression groups.

    For example: {alt1,alt2} -> (alt1|alt2)
    """
Severity: Minor
Found in src/borg/helpers/shellpattern.py - About 2 hrs to fix

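For intuition about what the function computes, here is a deliberately naive, standalone version of the translation described in the docstring; the real helper also handles nesting and escaping, which is where its complexity comes from:

    import re

    def translate_alternatives_naive(pat):
        # {a,b,c} -> (a|b|c); no support for nested braces or escaped commas.
        return re.sub(r"\{([^{}]*)\}", lambda m: "(" + m.group(1).replace(",", "|") + ")", pat)

    assert translate_alternatives_naive("{alt1,alt2}") == "(alt1|alt2)"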

Function get_item_uid_gid has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def get_item_uid_gid(item, *, numeric, uid_forced=None, gid_forced=None, uid_default=0, gid_default=0):
    if uid_forced is not None:
        uid = uid_forced
    else:
        uid = None if numeric else user2uid(item.get("user"))
Severity: Minor
Found in src/borg/archive.py - About 2 hrs to fix

Function check_free_space has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def check_free_space(self):
        """Pre-commit check for sufficient free space necessary to perform the commit."""
        # As a baseline we take four times the current (on-disk) index size.
        # At this point the index may only be updated by compaction, which won't resize it.
        # We still apply a factor of four so that a later, separate invocation can free space
Severity: Minor
Found in src/borg/legacyrepository.py - About 2 hrs to fix

Function get_tar_filter has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def get_tar_filter(fname, decompress):
    # Note that filter is None if fname is '-'.
    if fname.endswith((".tar.gz", ".tgz")):
        filter = "gzip -d" if decompress else "gzip"
    elif fname.endswith((".tar.bz2", ".tbz")):
Severity: Minor
Found in src/borg/archiver/tar_cmds.py - About 2 hrs to fix

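Suffix-dispatch chains like this one usually drop below the complexity threshold when rewritten as a lookup table. A hedged sketch covering only what the excerpt shows; the bzip2 commands, the fallback behavior, and any further suffixes handled by the real function are assumptions:

    TAR_FILTERS = {
        (".tar.gz", ".tgz"): ("gzip -d", "gzip"),
        (".tar.bz2", ".tbz"): ("bzip2 -d", "bzip2"),  # assumed by analogy
    }

    def get_tar_filter(fname, decompress):
        if fname == "-":
            return None  # filter is None if fname is '-'
        for suffixes, (dec, comp) in TAR_FILTERS.items():
            if fname.endswith(suffixes):
                return dec if decompress else comp
        return None  # assumed fallback; the real function may raise instead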

Function get_all_parsers has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def get_all_parsers():
    # Return dict mapping command to parser.
    parser = Archiver(prog="borg").build_parser()
    borgfs_parser = Archiver(prog="borgfs").build_parser()
    parsers = {}
Severity: Minor
Found in src/borg/testsuite/archiver/help_cmd_test.py - About 2 hrs to fix

Function do_repo_compress has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def do_repo_compress(self, args, repository, manifest, cache):
        """Repository (re-)compression"""

        def get_csettings(c):
            if isinstance(c, Auto):
Severity: Minor
Found in src/borg/archiver/repo_compress_cmd.py - About 2 hrs to fix

Function are_acls_working has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

def are_acls_working():
    with unopened_tempfile() as filepath:
        open(filepath, "w").close()
        try:
            if is_darwin:
Severity: Minor
Found in src/borg/testsuite/platform_test.py - About 2 hrs to fix

Function _write_files_cache has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def _write_files_cache(self, files):
        """write files cache to cache directory"""
        max_time_ns = 2**63 - 1  # nanoseconds, good until y2262
        # _self._newest_cmtime might be None if it was never set because no files were modified/added.
        newest_cmtime = self._newest_cmtime if self._newest_cmtime is not None else max_time_ns
Severity: Minor
Found in src/borg/cache.py - About 2 hrs to fix

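The "good until y2262" comment follows from the range of a signed 64-bit nanosecond timestamp; a quick check of the arithmetic:

    from datetime import datetime, timedelta

    # 2**63 - 1 nanoseconds after the Unix epoch:
    print(datetime(1970, 1, 1) + timedelta(microseconds=(2**63 - 1) / 1000))
    # -> 2262-04-11 23:47:16.854776, so the cap holds until the year 2262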

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_all_at_front(self):
        self.compare_compact("*DEEED")
        self.compare_compact("**DEED")
        self.compare_compact("***EED")
        self.compare_compact("****ED")
Severity: Major
Found in src/borg/testsuite/hashindex_test.py and 1 other location - About 2 hrs to fix
src/borg/testsuite/hashindex_test.py on lines 359..364

This issue has a mass of 51.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_all_at_back(self):
        self.compare_compact("EDEEE*")
        self.compare_compact("DEDE**")
        self.compare_compact("DED***")
        self.compare_compact("ED****")
Severity: Major
Found in src/borg/testsuite/hashindex_test.py and 1 other location - About 2 hrs to fix
src/borg/testsuite/hashindex_test.py on lines 352..357

This issue has a mass of 51.
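The two test methods differ only in the patterns they feed to compare_compact, which makes them a natural fit for pytest parametrization. A sketch under the assumption that the enclosing class is a plain pytest-style class (parametrize does not work on unittest.TestCase methods) and that compare_compact is available on self, as the excerpts suggest:

    import pytest

    @pytest.mark.parametrize(
        "pattern",
        [
            "*DEEED", "**DEED", "***EED", "****ED",  # all at front
            "EDEEE*", "DEDE**", "DED***", "ED****",  # all at back
        ],
    )
    def test_compaction(self, pattern):
        self.compare_compact(pattern)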

File logger.py has 252 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""logging facilities

The way to use this is as follows:

* each module declares its own logger, using:
Severity: Minor
Found in src/borg/logger.py - About 2 hrs to fix

Similar blocks of code found in 4 locations. Consider refactoring.
Open

    if archiver.FORK_DEFAULT:
        expected_ec = NotABorgKeyFile().exit_code
        cmd(archiver, "key", "import", export_file, exit_code=expected_ec)
    else:
        with pytest.raises(NotABorgKeyFile):
Severity: Major
Found in src/borg/testsuite/archiver/key_cmds_test.py and 3 other locations - About 2 hrs to fix
src/borg/testsuite/archiver/key_cmds_test.py on lines 173..178
src/borg/testsuite/archiver/key_cmds_test.py on lines 198..203
src/borg/testsuite/archiver/key_cmds_test.py on lines 218..223

This issue has a mass of 50.
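All four locations assert the same fork-vs-in-process failure behavior around `borg key import`; one helper would express it once. A sketch built from this excerpt; the helper name is invented, and since the in-process branch is truncated in the report, its body here is an assumption:

    def assert_key_import_fails(archiver, export_file, exc_cls):
        if archiver.FORK_DEFAULT:
            # Forked: the exception surfaces only as an exit code.
            cmd(archiver, "key", "import", export_file, exit_code=exc_cls().exit_code)
        else:
            # In-process: assert on the exception type directly (assumed body).
            with pytest.raises(exc_cls):
                cmd(archiver, "key", "import", export_file)

    assert_key_import_fails(archiver, export_file, NotABorgKeyFile)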
