Showing 547 of 589 total issues
Function _load_data_from_tree has a Cognitive Complexity of 322 (exceeds 5 allowed). Consider refactoring.
def _load_data_from_tree(index, prefix, ws, key, tree, hash_name):
from dvc_data.index import DataIndexEntry, Meta
parents = set()
Cognitive Complexity
Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.
A method's cognitive complexity is based on a few simple rules (annotated in the sketch after this list):
- Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
- Code is considered more complex for each "break in the linear flow of the code"
- Code is considered more complex when "flow breaking structures are nested"
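To make those rules concrete, here is a minimal sketch, with invented names and not taken from the DVC codebase, of the same filtering logic written twice. The per-line annotations follow the rules above: each flow-breaking structure costs +1, plus +1 for every level of nesting it sits under, while an unlabeled continue and language shorthand such as a comprehension add nothing.

# Nested shape: increments grow with depth, so the score climbs quickly.
def positive_cells_nested(rows):
    out = []
    for row in rows:                # +1
        if row:                     # +2 (if = +1, nesting = +1)
            for cell in row:        # +3 (for = +1, nesting = +2)
                if cell > 0:        # +4 (if = +1, nesting = +3)
                    out.append(cell)
    return out                      # total: 10

# Flat shape: a guard clause and an extracted helper keep each unit cheap,
# because the metric is computed per function.
def positive_cells_flat(rows):
    out = []
    for row in rows:                # +1
        if not row:                 # +2 (if = +1, nesting = +1)
            continue                # unlabeled continue: no increment
        out.extend(_positive(row))
    return out                      # total: 3

def _positive(row):
    # A comprehension is the kind of shorthand that rule 1 leaves unpenalized.
    return [cell for cell in row if cell > 0]

Refactoring toward the flat shape is typically what "Consider refactoring" asks for on the cognitive-complexity findings in this report.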
File output.py has 1218 lines of code (exceeds 250 allowed). Consider refactoring.
import errno
import os
import posixpath
from collections import defaultdict
from contextlib import suppress
Similar blocks of code found in 2 locations. Consider refactoring.
@pytest.mark.xfail(raises=NotImplementedError, strict=False)
def test_pull_no_00_prefix(self, tmp_dir, dvc, remote, monkeypatch):
# Related: https://github.com/iterative/dvc/issues/6244
fs_type = type(dvc.cloud.get_remote_odb("upstream").fs)
Duplicated Code
Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
Tuning
This issue has a mass of 252.
We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.
The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.
If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.
See the codeclimate-duplication engine's documentation for more information about tuning the mass threshold in your .codeclimate.yml; a hedged sketch of such an override follows.
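As an illustration only, a per-language override might look like the snippet below. The keys shown (plugins, config, languages, mass_threshold) follow the commonly documented codeclimate-duplication layout, but the exact schema depends on your configuration version, so verify it against the engine's documentation before committing.

version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 60  # raise to flag only larger duplicated blocks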
Refactorings
- Extract Method (sketched after the reading list below)
- Extract Class
- Form Template Method
- Introduce Null Object
- Pull Up Method
- Pull Up Field
- Substitute Algorithm
Further Reading
- Don't Repeat Yourself on the C2 Wiki
- Duplicated Code on SourceMaking
- Refactoring: Improving the Design of Existing Code by Martin Fowler. Duplicated Code, p76
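As a sketch of Extract Method, using invented names rather than the flagged DVC tests, here are two near-duplicate functions collapsed into one parameterized helper; the public names survive, so call sites do not change:

# Before: the same loop appears twice, differing only in the separator.
def report_csv(rows):
    lines = []
    for row in rows:
        lines.append(",".join(str(v) for v in row))
    return "\n".join(lines)

def report_tsv(rows):
    lines = []
    for row in rows:
        lines.append("\t".join(str(v) for v in row))
    return "\n".join(lines)

# After: one authoritative implementation, parameterized by what varied.
def _report(rows, sep):
    return "\n".join(sep.join(str(v) for v in row) for row in rows)

def report_csv(rows):
    return _report(rows, ",")

def report_tsv(rows):
    return _report(rows, "\t")

For the pair of tests flagged above, the analogous move is a single parameterized test helper (or a pytest parametrize over the hash prefix) so the two bodies cannot drift apart.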
Similar blocks of code found in 2 locations. Consider refactoring.
@pytest.mark.xfail(raises=NotImplementedError, strict=False)
def test_pull_00_prefix(self, tmp_dir, dvc, remote, monkeypatch):
# Related: https://github.com/iterative/dvc/issues/6089
fs_type = type(dvc.cloud.get_remote_odb("upstream").fs)
File index.py has 737 lines of code (exceeds 250 allowed). Consider refactoring.
import logging
import time
from collections import defaultdict
from collections.abc import Iterable, Iterator
from functools import partial
File base.py has 707 lines of code (exceeds 250 allowed). Consider refactoring.
import logging
import os
import pickle
import shutil
from abc import ABC, abstractmethod
File base.py has 594 lines of code (exceeds 250 allowed). Consider refactoring.
import os
from abc import ABC, abstractmethod
from collections.abc import Collection, Generator, Iterable, Mapping
from dataclasses import asdict, dataclass
from typing import TYPE_CHECKING, Any, NamedTuple, Optional, Union
File dvc.py has 594 lines of code (exceeds 250 allowed). Consider refactoring.
import errno
import functools
import ntpath
import os
import posixpath
File celery.py has 531 lines of code (exceeds 250 allowed). Consider refactoring.
import hashlib
import locale
import logging
import os
from collections import defaultdict
File __init__.py has 474 lines of code (exceeds 250 allowed). Consider refactoring.
import logging
import os
from collections.abc import Mapping, Sequence
from copy import deepcopy
from itertools import product
File __init__.py has 466 lines of code (exceeds 250 allowed). Consider refactoring.
import csv
import io
import os
from collections import defaultdict
from collections.abc import Iterator
File context.py has 440 lines of code (exceeds 250 allowed). Consider refactoring.
from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Mapping, MutableMapping, MutableSequence, Sequence
from contextlib import contextmanager
from copy import deepcopy
File data_sync.py has 400 lines of code (exceeds 250 allowed). Consider refactoring.
import argparse
from dvc.cli import completion, formatter
from dvc.cli.command import CmdBase
from dvc.cli.utils import append_doc_link
LocalCeleryQueue has 39 functions (exceeds 20 allowed). Consider refactoring.
class LocalCeleryQueue(BaseStashQueue):
"""DVC experiment queue.
Maps queued experiments to (Git) stash reflog entries.
"""
Function match_defs_renderers has a Cognitive Complexity of 34 (exceeds 5 allowed). Consider refactoring.
def match_defs_renderers( # noqa: C901, PLR0912
data,
out=None,
templates_dir: Optional["StrPath"] = None,
) -> list[RendererWithErrors]:
Function dumpd has a Cognitive Complexity of 34 (exceeds 5 allowed). Consider refactoring.
def dumpd(self, **kwargs): # noqa: C901, PLR0912
from dvc.cachemgr import LEGACY_HASH_NAMES
ret: dict[str, Any] = {}
with_files = (
File compare.py has 350 lines of code (exceeds 250 allowed). Consider refactoring.
from collections import abc
from collections.abc import (
ItemsView,
Iterable,
Iterator,
File ignore.py has 348 lines of code (exceeds 250 allowed). Consider refactoring.
import os
import re
from collections import namedtuple
from itertools import chain, groupby, takewhile
from typing import TYPE_CHECKING, Optional
Experiments has 34 functions (exceeds 20 allowed). Consider refactoring.
class Experiments:
"""Class that manages experiments in a DVC repo.
Args:
repo (dvc.repo.Repo): repo instance that these experiments belong to.
Output has 34 functions (exceeds 20 allowed). Consider refactoring.
class Output:
IS_DEPENDENCY = False
PARAM_PATH = "path"
PARAM_CACHE = "cache"