Showing 159 of 159 total issues

Function process_archive has a Cognitive Complexity of 33 (exceeds 5 allowed). Consider refactoring.
Open

    def process_archive(self, archive, prefix=[]):
        """Build fuse inode hierarchy from archive metadata
        """
        unpacker = msgpack.Unpacker()
        for key, chunk in zip(archive.metadata[b'items'], self.repository.get_many(archive.metadata[b'items'])):
Severity: Minor
Found in attic/fuse.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

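To make the rules above concrete, here is a small self-contained Python sketch (illustrative only, not attic code): the first version pays for each break in linear flow and again for every level of nesting, while the second flattens the nesting with an early continue and a small helper.

    # Illustrative only -- not attic code.  The annotations are a rough guide
    # to where cognitive complexity charges increments; exact totals depend on
    # the published specification.
    def count_text_files_nested(paths):
        total = 0
        for path in paths:                     # break in linear flow
            if path.endswith('.txt'):          # another break, plus a nesting increment
                try:
                    with open(path):           # just probing that the file opens
                        total += 1
                except OSError:                # another break, nested deeper still
                    pass
        return total

    def _is_readable(path):
        try:
            with open(path):
                return True
        except OSError:                        # break, but no deep nesting
            return False

    def count_text_files_flat(paths):
        total = 0
        for path in paths:                     # break in linear flow
            if not path.endswith('.txt'):      # break with shallow nesting
                continue
            total += _is_readable(path)
        return total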

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_crash_before_compact_segments(self):
        self.add_keys()
        self.repository.compact_segments = None
        try:
            self.repository.commit()
Severity: Major
Found in attic/testsuite/repository.py and 1 other location - About 4 hrs to fix
attic/testsuite/repository.py on lines 163..172

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 78.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
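
As a rough sketch of what such tuning can look like (the exact keys should be verified against codeclimate-duplication's documentation for your analysis format version), the threshold is configured per language in .codeclimate.yml:

    # Hedged sketch of a classic-format .codeclimate.yml; verify the keys
    # against the codeclimate-duplication docs before relying on them.
    engines:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 60    # raise to report less, lower to report more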

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def test_crash_before_write_index(self):
        self.add_keys()
        self.repository.write_index = None
        try:
            self.repository.commit()
Severity: Major
Found in attic/testsuite/repository.py and 1 other location - About 4 hrs to fix
attic/testsuite/repository.py on lines 141..150

This issue has a mass of 78.
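
The two crash tests above differ only in which commit phase they disable. A generic, self-contained sketch of the usual fix, folding both bodies into one parametrised helper (FakeRepository, the helper name, and everything past repository.commit() are invented for illustration, since the excerpts are truncated):

    import unittest

    class FakeRepository:
        """Stand-in with the two commit phases the tests break, in order."""
        def compact_segments(self):
            pass

        def write_index(self):
            pass

        def commit(self):
            self.compact_segments()
            self.write_index()

    class CrashTests(unittest.TestCase):
        def _commit_with_broken_phase(self, phase):
            # One shared helper instead of two near-identical test bodies.
            repo = FakeRepository()
            setattr(repo, phase, None)            # make that phase uncallable
            with self.assertRaises(TypeError):    # calling None raises TypeError
                repo.commit()

        def test_crash_before_compact_segments(self):
            self._commit_with_broken_phase('compact_segments')

        def test_crash_before_write_index(self):
            self._commit_with_broken_phase('write_index')

    if __name__ == '__main__':
        unittest.main()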

Function restore_attrs has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.
Open

    def restore_attrs(self, path, item, symlink=False, fd=None):
        xattrs = item.get(b'xattrs')
        if xattrs:
                for k, v in xattrs.items():
                    try:
Severity: Minor
Found in attic/archive.py - About 3 hrs to fix

Function __next__ has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.
Open

    def __next__(self):
        if self._resync:
            data = b''.join(self._buffered_data)
            while self._resync:
                if not data:
Severity: Minor
Found in attic/archive.py - About 3 hrs to fix

Function run has 92 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def run(self, args=None):
        check_extension_modules()
        keys_dir = get_keys_dir()
        if not os.path.exists(keys_dir):
            os.makedirs(keys_dir)
Severity: Major
Found in attic/archiver.py - About 3 hrs to fix
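
One common way to bring a 92-line run() back under budget is to let each subcommand register its own parser and handler so that run() only parses and dispatches. A minimal, self-contained sketch (the command set and handler bodies are invented, not attic's actual ones):

    import argparse

    class Archiver:
        def build_parser(self):
            # Each subcommand is registered by a small helper, keeping any
            # single method short.
            parser = argparse.ArgumentParser(prog='attic')
            subparsers = parser.add_subparsers(dest='command', required=True)
            self._register_list(subparsers)
            self._register_extract(subparsers)
            return parser

        def _register_list(self, subparsers):
            p = subparsers.add_parser('list', help='list archive or repository contents')
            p.set_defaults(func=self.do_list)

        def _register_extract(self, subparsers):
            p = subparsers.add_parser('extract', help='extract archive contents')
            p.add_argument('archive')
            p.set_defaults(func=self.do_extract)

        def do_list(self, args):
            print('listing...')

        def do_extract(self, args):
            print('extracting', args.archive)

        def run(self, argv=None):
            # run() stays a handful of lines: parse, then dispatch.
            args = self.build_parser().parse_args(argv)
            return args.func(args)

    if __name__ == '__main__':
        Archiver().run()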

Function do_extract has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

    def do_extract(self, args):
        """Extract archive contents"""
        # be restrictive when restoring files, restore permissions later
        if sys.getfilesystemencoding() == 'ascii':
            print('Warning: File system encoding is "ascii", extracting non-ascii filenames will not be supported.')
Severity: Minor
Found in attic/archiver.py - About 3 hrs to fix

Function replay_segments has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
Open

    def replay_segments(self, index_transaction_id, segments_transaction_id):
        self.prepare_txn(index_transaction_id, do_cleanup=False)
        for segment, filename in self.io.segment_iterator():
            if index_transaction_id is not None and segment <= index_transaction_id:
                continue
Severity: Minor
Found in attic/repository.py - About 3 hrs to fix

Function do_list has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

    def do_list(self, args):
        """List archive or repository contents"""
        repository = self.open_repository(args.src)
        manifest, key = Manifest.load(repository)
        if args.src.archive:
Severity: Minor
Found in attic/archiver.py - About 3 hrs to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

        try:
            self.attic('mount', self.repository_location, mountpoint, fork=True)
            self.wait_for_mount(mountpoint)
            self.assert_dirs_equal(self.input_path, os.path.join(mountpoint, 'archive', 'input'))
            self.assert_dirs_equal(self.input_path, os.path.join(mountpoint, 'archive2', 'input'))
Severity: Major
Found in attic/testsuite/archiver.py and 1 other location - About 3 hrs to fix
attic/testsuite/archiver.py on lines 392..403

This issue has a mass of 62.

Identical blocks of code found in 2 locations. Consider refactoring.
Open

        try:
            self.attic('mount', self.repository_location + '::archive', mountpoint, fork=True)
            self.wait_for_mount(mountpoint)
            self.assert_dirs_equal(self.input_path, os.path.join(mountpoint, 'input'))
        finally:
Severity: Major
Found in attic/testsuite/archiver.py and 1 other location - About 3 hrs to fix
attic/testsuite/archiver.py on lines 371..383

This issue has a mass of 62.
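
The two mount tests above repeat the same mount / wait / unmount scaffolding around different assertions. A generic sketch of pulling that scaffolding into a context manager (not attic's code; the unmount command in the finally branch is an assumption, since both excerpts stop before their finally bodies):

    import os
    import subprocess
    from contextlib import contextmanager

    @contextmanager
    def fuse_mount(mount_cmd, mountpoint):
        # Shared scaffolding: mount, hand control to the test, always unmount.
        os.makedirs(mountpoint, exist_ok=True)
        subprocess.check_call(mount_cmd)          # e.g. ['attic', 'mount', repo, mountpoint]
        try:
            yield mountpoint
        finally:
            subprocess.check_call(['fusermount', '-u', mountpoint])   # Linux FUSE unmount (assumption)
            os.rmdir(mountpoint)

    # Each test body then shrinks to its own assertions, e.g.:
    # with fuse_mount(['attic', 'mount', repo + '::archive', mnt], mnt):
    #     assert_dirs_equal(input_path, os.path.join(mnt, 'input'))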

Function _process has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def _process(self, archive, cache, excludes, exclude_caches, skip_inodes, path, restrict_dev):
        if exclude_path(path, excludes):
            return
        try:
            st = os.lstat(path)
Severity: Minor
Found in attic/archiver.py - About 2 hrs to fix

Function serve has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
Open

    def serve(self):
        stdin_fd = sys.stdin.fileno()
        stdout_fd = sys.stdout.fileno()
        # Make stdin non-blocking
        fl = fcntl.fcntl(stdin_fd, fcntl.F_GETFL)
Severity: Minor
Found in attic/remote.py - About 2 hrs to fix

ArchiverTestCase has 24 functions (exceeds 20 allowed). Consider refactoring.
Open

class ArchiverTestCase(ArchiverTestCaseBase):

    def create_regular_file(self, name, size=0, contents=None):
        filename = os.path.join(self.input_path, name)
        if not os.path.exists(os.path.dirname(filename)):
Severity: Minor
Found in attic/testsuite/archiver.py - About 2 hrs to fix

Function compact_segments has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def compact_segments(self):
        """Compact sparse segments by copying data into new segments
        """
        if not self.compact:
            return
Severity: Minor
Found in attic/repository.py - About 2 hrs to fix

Function sync has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def sync(self):
        """Initializes cache by fetching and reading all archive indicies
        """
        def add(id, size, csize):
            try:
Severity: Minor
Found in attic/cache.py - About 2 hrs to fix

File key.py has 271 lines of code (exceeds 250 allowed). Consider refactoring.
Open

from binascii import hexlify, a2b_base64, b2a_base64
from getpass import getpass
import os
import msgpack
import textwrap
Severity: Minor
Found in attic/key.py - About 2 hrs to fix

File repository.py has 268 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import os
import shutil
import tempfile
from attic.testsuite.mock import patch
from attic.hashindex import NSIndex
Severity: Minor
Found in attic/testsuite/repository.py - About 2 hrs to fix

File remote.py has 265 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import errno
import fcntl
import msgpack
import os
import select
Severity: Minor
Found in attic/remote.py - About 2 hrs to fix

Repository has 22 functions (exceeds 20 allowed). Consider refactoring.
Open

class Repository(object):
    """Filesystem based transactional key value store

    On disk layout:
    dir/README
Severity: Minor
Found in attic/repository.py - About 2 hrs to fix