borgbackup/borg

Showing 611 of 611 total issues

Similar blocks of code found in 2 locations. Consider refactoring.

    def test_reuse_after_add_chunk(self, cache):
        assert cache.add_chunk(H(3), {}, b"5678", stats=Statistics()) == (H(3), 4)
        assert cache.reuse_chunk(H(3), 4, Statistics()) == (H(3), 4)
Severity: Major
Found in src/borg/testsuite/cache_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/cache_test.py on lines 47..49

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs behind as the two similar implementations drift apart in subtle ways).

Tuning

This issue has a mass of 71.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
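
For example, a tuning stanza in .codeclimate.yml might look like the sketch below (structure per codeclimate-duplication's documentation; the threshold value is an arbitrary illustration, not a recommendation for this project):

plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60  # raise to report less duplication, lower to report more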

Similar blocks of code found in 2 locations. Consider refactoring.

    def test_existing_reuse_after_add_chunk(self, cache):
        assert cache.add_chunk(H(1), {}, b"5678", stats=Statistics()) == (H(1), 4)
        assert cache.reuse_chunk(H(1), 4, Statistics()) == (H(1), 4)
Severity: Major
Found in src/borg/testsuite/cache_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/cache_test.py on lines 43..45

This issue has a mass of 71.
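
These two tests differ only in the key they use, so one possible fix (a sketch, not the project's actual refactoring) is to parametrize the shared body with pytest; the class name below is hypothetical, while cache, H, and Statistics are the fixture and helpers already used by cache_test.py:

import pytest

class TestCache:  # hypothetical name; use the tests' real class
    @pytest.mark.parametrize("key", [H(1), H(3)])
    def test_reuse_after_add_chunk(self, cache, key):
        # one body covers both the fresh-chunk and existing-chunk cases
        assert cache.add_chunk(key, {}, b"5678", stats=Statistics()) == (key, 4)
        assert cache.reuse_chunk(key, 4, Statistics()) == (key, 4)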

Repository has 30 functions (exceeds 20 allowed). Consider refactoring.

class Repository:
    """borgstore based key value store"""

    class AlreadyExists(Error):
        """A repository already exists at {}."""
Severity: Minor
Found in src/borg/repository.py - About 3 hrs to fix

LegacyRemoteRepository has 30 functions (exceeds 20 allowed). Consider refactoring.

class LegacyRemoteRepository:
    extra_test_args = []  # type: ignore

    class RPCError(Exception):
        def __init__(self, unpacked):
Severity: Minor
Found in src/borg/legacyremote.py - About 3 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.

        if progress:
            pi = ProgressIndicatorPercent(msg="%5.1f%% Processing: %s", step=0.1, msgid="extract")
            pi.output("Calculating size")
            extracted_size = sum(item.get_size() for item in archive.iter_items(filter))
            pi.total = extracted_size
Severity: Major
Found in src/borg/archiver/tar_cmds.py and 1 other location - About 3 hrs to fix
src/borg/archiver/extract_cmd.py on lines 51..57

This issue has a mass of 70.

Similar blocks of code found in 2 locations. Consider refactoring.

        if progress:
            pi = ProgressIndicatorPercent(msg="%5.1f%% Extracting: %s", step=0.1, msgid="extract")
            pi.output("Calculating total archive size for the progress indicator (might take long for large archives)")
            extracted_size = sum(item.get_size() for item in archive.iter_items(filter))
            pi.total = extracted_size
Severity: Major
Found in src/borg/archiver/extract_cmd.py and 1 other location - About 3 hrs to fix
src/borg/archiver/tar_cmds.py on lines 104..110

This issue has a mass of 70.
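
The two blocks differ only in their message strings, so one conceivable refactoring (a sketch, not borg's actual code) is a small helper both commands call. ProgressIndicatorPercent and archive.iter_items are the names from the snippets above; the helper's name and placement are hypothetical:

def make_extract_progress(archive, filter, msg, calculating_msg):
    # Build the percent indicator, announce the size calculation, and
    # precompute the total so each caller only supplies its messages.
    pi = ProgressIndicatorPercent(msg=msg, step=0.1, msgid="extract")
    pi.output(calculating_msg)
    pi.total = sum(item.get_size() for item in archive.iter_items(filter))
    return pi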

File mount_cmds_test.py has 317 lines of code (exceeds 250 allowed). Consider refactoring.

import errno
import os
import stat
import sys

Severity: Minor
Found in src/borg/testsuite/archiver/mount_cmds_test.py - About 3 hrs to fix

Function unpack_many has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.

    def unpack_many(self, ids, *, filter=None, preload=False):
        """
        Return iterator of items.

        *ids* is a chunk ID list of an item stream. *filter* is a callable
Severity: Minor
Found in src/borg/archive.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
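
As a toy illustration of these rules (the increments shown follow the general scheme, not the exact scoring of any particular tool):

def first_visible_name(items):
    for item in items:        # +1: a break in the linear flow
        if item.visible:      # +1 for the branch, plus +1 because it is nested
            return item.name
    return None               # indicative cognitive complexity: 3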

Function verify_data has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.

    def verify_data(self):
        logger.info("Starting cryptographic data integrity verification...")
        chunks_count = len(self.chunks)
        errors = 0
        defect_chunks = []
Severity: Minor
Found in src/borg/archive.py - About 3 hrs to fix

Function iter_archive_items has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.

    def iter_archive_items(self, archive_item_ids, filter=None):
        unpacker = msgpack.Unpacker()

        # Current offset in the metadata stream, which consists of all metadata chunks glued together
        stream_offset = 0
Severity: Minor
Found in src/borg/fuse.py - About 3 hrs to fix
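
The snippet above uses msgpack's streaming Unpacker to decode items that span chunk boundaries. A minimal sketch of that pattern (fetch_chunk is a hypothetical stand-in for however the raw chunks are retrieved):

import msgpack

def iter_items(chunk_ids, fetch_chunk):
    unpacker = msgpack.Unpacker()
    for cid in chunk_ids:
        unpacker.feed(fetch_chunk(cid))  # append raw bytes to the internal buffer
        yield from unpacker              # yield each fully decoded object so far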

Identical blocks of code found in 2 locations. Consider refactoring.

        self.assert_equal(
            parts, [data[0:123], data[123 : 123 + 4096], data[123 + 4096 : 123 + 8192], data[123 + 8192 :]]
Severity: Major
Found in src/borg/testsuite/chunker_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/chunker_test.py on lines 54..55

This issue has a mass of 69.

Identical blocks of code found in 2 locations. Consider refactoring.

        self.assert_equal(
            parts, [data[0:123], data[123 : 123 + 4096], data[123 + 4096 : 123 + 8192], data[123 + 8192 :]]
Severity: Major
Found in src/borg/testsuite/chunker_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/chunker_test.py on lines 38..39

This issue has a mass of 69.
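
Since both assertions hard-code the same slice boundaries, one sketch of a fix is to compute the expected parts once from the boundary offsets; split_at is a hypothetical helper, while data and parts are the names from the snippets above:

def split_at(data, boundaries):
    # Split data at the given absolute offsets, e.g. [123, 4219, 8315].
    offsets = [0, *boundaries, len(data)]
    return [data[a:b] for a, b in zip(offsets, offsets[1:])]

# expected = split_at(data, [123, 123 + 4096, 123 + 8192])
# self.assert_equal(parts, expected)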

File fslocking_test.py has 311 lines of code (exceeds 250 allowed). Consider refactoring.

import random
import time
from threading import Thread, Lock as ThreadingLock
from traceback import format_exc

Severity: Minor
Found in src/borg/testsuite/fslocking_test.py - About 3 hrs to fix

Function create_filter_process has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.

def create_filter_process(cmd, stream, stream_close, inbound=True):
    if cmd:
        # put a filter process between stream and us (e.g. a [de]compression command)
        # inbound: <stream> --> filter --> us
        # outbound: us --> filter --> <stream>
Severity: Minor
Found in src/borg/helpers/process.py - About 3 hrs to fix
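
A minimal sketch of the idea those comments describe: spawn the filter command as a subprocess and splice it into the stream. This is not borg's implementation (error handling and the stream_close logic are omitted), just the core plumbing:

import shlex
import subprocess

def filtered_stream(cmd, stream, inbound=True):
    if inbound:
        # <stream> --> filter --> us: the filter reads the stream, we read its stdout
        proc = subprocess.Popen(shlex.split(cmd), stdin=stream, stdout=subprocess.PIPE)
        return proc.stdout, proc
    else:
        # us --> filter --> <stream>: we write to the filter's stdin, it writes the stream
        proc = subprocess.Popen(shlex.split(cmd), stdin=subprocess.PIPE, stdout=stream)
        return proc.stdin, proc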

Similar blocks of code found in 2 locations. Consider refactoring.

    def test_got_exclusive_lock(self, lockstore):
        lock = Lock(lockstore, exclusive=True, id=ID1)
        assert not lock.got_exclusive_lock()
        lock.acquire()
        assert lock.got_exclusive_lock()
Severity: Major
Found in src/borg/testsuite/storelocking_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/fslocking_test.py on lines 244..250

This issue has a mass of 66.

Similar blocks of code found in 2 locations. Consider refactoring.

    def test_got_exclusive_lock(self, lockpath):
        lock = Lock(lockpath, exclusive=True, id=ID1)
        assert not lock.got_exclusive_lock()
        lock.acquire()
        assert lock.got_exclusive_lock()
Severity: Major
Found in src/borg/testsuite/fslocking_test.py and 1 other location - About 3 hrs to fix
src/borg/testsuite/storelocking_test.py on lines 29..35

This issue has a mass of 66.
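
The two test bodies are identical apart from the fixture (lockpath vs. lockstore), so one sketch of a fix is a shared mixin driven by a common fixture name. Everything below is hypothetical naming: each suite would bind its own Lock class and provide the lock_target fixture:

class ExclusiveLockTestsMixin:
    # Subclasses set Lock (fslocking.Lock or storelocking.Lock) and
    # define a lock_target fixture (a path or a store, respectively).
    def test_got_exclusive_lock(self, lock_target):
        lock = self.Lock(lock_target, exclusive=True, id=ID1)
        assert not lock.got_exclusive_lock()
        lock.acquire()
        assert lock.got_exclusive_lock()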

File hashindex_test.py has 302 lines of code (exceeds 250 allowed). Consider refactoring.

# Note: these tests are part of the self test, do not use or import pytest functionality here.
#       See borg.selftest for details. If you add/remove test methods, update SELFTEST_COUNT

import base64
import hashlib
Severity: Minor
Found in src/borg/testsuite/hashindex_test.py - About 3 hrs to fix

Function borg_cmd has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.

    def borg_cmd(self, args, testing):
        """return a borg serve command line"""
        # give some args/options to 'borg serve' process as they were given to us
        opts = []
        if args is not None:
Severity: Minor
Found in src/borg/remote.py - About 3 hrs to fix

Function borg_cmd has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.

    def borg_cmd(self, args, testing):
        """return a borg serve command line"""
        # give some args/options to 'borg serve' process as they were given to us
        opts = []
        if args is not None:
Severity: Minor
Found in src/borg/legacyremote.py - About 3 hrs to fix

Function _assert_dirs_equal_cmp has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.

def _assert_dirs_equal_cmp(diff, ignore_flags=False, ignore_xattrs=False, ignore_ns=False):
    assert diff.left_only == []
    assert diff.right_only == []
    assert diff.diff_files == []
    assert diff.funny_files == []
Severity: Minor
Found in src/borg/testsuite/archiver/__init__.py - About 3 hrs to fix
