Showing 611 of 611 total issues
File archive.py has 1968 lines of code (exceeds 250 allowed). Consider refactoring. Open
import base64
import errno
import json
import os
import stat
File legacyrepository.py has 1437 lines of code (exceeds 250 allowed). Consider refactoring. Open
import errno
import mmap
import os
import shutil
import stat
Function call_many has a Cognitive Complexity of 172 (exceeds 5 allowed). Consider refactoring. Open
def call_many(self, cmd, calls, wait=True, is_preloaded=False, async_wait=True):
if not calls and cmd != "async_responses":
return
def send_buffer():
Cognitive Complexity
Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.
A method's cognitive complexity is based on a few simple rules:
- Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
- Code is considered more complex for each "break in the linear flow of the code"
- Code is considered more complex when "flow breaking structures are nested"
Further reading
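To make the rules above concrete, here is a hedged sketch (illustrative code, not taken from this repository) of how nested flow-breaking structures drive the score up, and how extracting the inner logic into a helper flattens the nesting. The +n annotations follow the rules informally rather than reproducing the exact metric.

def summarize(batches):
    total = 0
    for batch in batches:                  # +1: break in the linear flow
        for item in batch:                 # +2: flow break, nested once
            if item.get("ok"):             # +3: flow break, nested twice
                total += item["size"]
    return total

def _batch_size(batch):
    # Extracted helper: the same branching now sits at shallow nesting depth,
    # and the comprehension shorthand adds little to the score.
    return sum(item["size"] for item in batch if item.get("ok"))

def summarize_flat(batches):
    total = 0
    for batch in batches:                  # +1: the only remaining flow break here
        total += _batch_size(batch)
    return total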
Function call_many has a Cognitive Complexity of 171 (exceeds 5 allowed). Consider refactoring. Open
def call_many(self, cmd, calls, wait=True, is_preloaded=False, async_wait=True):
if not calls and cmd != "async_responses":
return
def send_buffer():
Similar blocks of code found in 2 locations. Consider refactoring. Open
def test_AE(self):
# used in legacy-like layout (1 type byte, no aad)
key = b"X" * 32
iv_int = 0
data = b"foo" * 10
Duplicated Code
Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
Tuning
This issue has a mass of 359.
We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.
The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.
If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.
See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
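As a hedged illustration of such an override, a .codeclimate.yml along the following lines raises the Python mass threshold; the exact keys should be verified against the codeclimate-duplication documentation rather than taken from this sketch.

# Sketch only: verify key names and sensible threshold values against
# the codeclimate-duplication documentation before using.
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60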
Refactorings
- Extract Method
- Extract Class
- Form Template Method
- Introduce Null Object
- Pull Up Method
- Pull Up Field
- Substitute Algorithm
Further Reading
- Don't Repeat Yourself on the C2 Wiki
- Duplicated Code on SourceMaking
- Refactoring: Improving the Design of Existing Code by Martin Fowler. Duplicated Code, p76
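Applied to the two similar test blocks reported here (test_AE, and apparently its counterpart test_AEAD shown next, which differ essentially in whether associated data is passed), an Extract Method style cleanup could look roughly like the sketch below. The _StubCipher class and its encrypt/decrypt signatures are placeholders so the sketch runs standalone; they are not the cipher classes these tests actually exercise.

import pytest

class _StubCipher:
    # Placeholder stand-in so this sketch is self-contained; the real tests
    # use the project's own cipher classes, whose API is not assumed here.
    def __init__(self, key, iv_int):
        self.key, self.iv_int = key, iv_int

    def encrypt(self, data, aad=b""):
        return aad + data[::-1]

    def decrypt(self, envelope, aad=b""):
        assert envelope.startswith(aad)
        return envelope[len(aad):][::-1]

def _roundtrip(cipher, data, aad=b""):
    # The encrypt/decrypt roundtrip that both tests currently repeat.
    envelope = cipher.encrypt(data, aad=aad)
    assert cipher.decrypt(envelope, aad=aad) == data

@pytest.mark.parametrize("aad", [b"", b"bar" * 10], ids=["no-aad", "with-aad"])
def test_roundtrip(aad):
    key = b"X" * 32
    iv_int = 0
    data = b"foo" * 10
    _roundtrip(_StubCipher(key, iv_int), data, aad=aad)

Keeping the shared roundtrip in one helper means a change to the envelope layout only has to be made in one place instead of drifting apart across two near-identical tests.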
Similar blocks of code found in 2 locations. Consider refactoring. Open
def test_AEAD(self):
# test with aad
key = b"X" * 32
iv_int = 0
data = b"foo" * 10
This issue has a mass of 359.
Similar blocks of code found in 2 locations. Consider refactoring. Open
def test_AES256_CTR_HMAC_SHA256(self):
# this tests the layout as in borg < 1.2 (1 type byte, no aad)
mac_key = b"Y" * 32
enc_key = b"X" * 32
iv = 0
This issue has a mass of 328.
Similar blocks of code found in 2 locations. Consider refactoring. Open
def test_AES256_CTR_HMAC_SHA256_aad(self):
mac_key = b"Y" * 32
enc_key = b"X" * 32
iv = 0
data = b"foo" * 10
This issue has a mass of 328.
File helpers_test.py has 1196 lines of code (exceeds 250 allowed). Consider refactoring. Open
import base64
import errno
import getpass
import hashlib
import os
File remote.py has 1061 lines of code (exceeds 250 allowed). Consider refactoring. Open
import atexit
import errno
import functools
import inspect
import logging
Function do_create has a Cognitive Complexity of 127 (exceeds 5 allowed). Consider refactoring. Open
def do_create(self, args, repository, manifest):
"""Create new archive"""
key = manifest.key
matcher = PatternMatcher(fallback=True)
matcher.add_inclexcl(args.patterns)
Function rebuild_archives has a Cognitive Complexity of 125 (exceeds 5 allowed). Consider refactoring. Open
def rebuild_archives(
self, first=0, last=0, sort_by="", match=None, older=None, newer=None, oldest=None, newest=None
):
"""Analyze and rebuild archives, expecting some damage and trying to make stuff consistent again."""
File parseformat.py has 991 lines of code (exceeds 250 allowed). Consider refactoring. Open
import abc
import argparse
import base64
import binascii
import hashlib
File create_cmd.py has 851 lines of code (exceeds 250 allowed). Consider refactoring. Open
import errno
import sys
import argparse
import logging
import os
File legacyrepository_test.py has 832 lines of code (exceeds 250 allowed). Consider refactoring. Open
import logging
import os
import sys
from typing import Optional
from unittest.mock import patch
Function do_transfer has a Cognitive Complexity of 91 (exceeds 5 allowed). Consider refactoring. Open
def do_transfer(self, args, *, repository, manifest, cache, other_repository=None, other_manifest=None):
"""archives transfer from other repository, optionally upgrade data format"""
key = manifest.key
other_key = other_manifest.key
if not uses_same_id_hash(other_key, key):
File legacyremote.py has 763 lines of code (exceeds 250 allowed). Consider refactoring. Open
import errno
import functools
import inspect
import logging
import os
File create_cmd_test.py has 751 lines of code (exceeds 250 allowed). Consider refactoring. Open
import errno
import json
import os
import tempfile
import shutil
File key.py has 750 lines of code (exceeds 250 allowed). Consider refactoring. Open
import binascii
import hmac
import os
import textwrap
from hashlib import sha256, pbkdf2_hmac
Function extract_item has a Cognitive Complexity of 87 (exceeds 5 allowed). Consider refactoring. Open
def extract_item(
self,
item,
*,
restore_attrs=True,