pylhc/turn_by_turn


Showing 3 of 9 total issues

Function read_tbt has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def read_tbt(file_path: Union[str, Path], bunch_id: int = None) -> TbtData:
    """
    Reads turn-by-turn data from an ASCII turn-by-turn format file, and returns the date as well as
    parsed matrices for construction of a ``TbtData`` object.

Severity: Minor
Found in turn_by_turn/ascii.py - About 1 hr to fix
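A score like this usually comes from nested parsing branches piled into one function, and the common fix is to pull each concern out into a small, flat helper. The sketch below is purely illustrative; `_parse_date`, `_parse_matrices`, and the `#DATETIME:` header are hypothetical names, not the actual internals of turn_by_turn/ascii.py.

from datetime import datetime
from pathlib import Path
from typing import Union


def read_tbt(file_path: Union[str, Path], bunch_id: int = None):
    """Keep the top level flat: each parsing concern is a named helper."""
    lines = Path(file_path).read_text().splitlines()
    return _parse_date(lines), _parse_matrices(lines)


def _parse_date(lines):
    """Header handling in one place (assumes a '#DATETIME:' header line)."""
    for line in lines:
        if line.startswith("#DATETIME:"):
            return datetime.fromisoformat(line.split(":", 1)[1].strip())
    return None


def _parse_matrices(lines):
    """Data rows parsed without any surrounding header logic."""
    return [
        [float(token) for token in line.split()]
        for line in lines
        if line and not line.startswith("#")
    ]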

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

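To make the rules concrete, here is a small hand-made illustration, with scores annotated as the rules above suggest rather than produced by the tool:

def nested(values):
    # Scores higher: +1 for the loop, +1 for the `if`, +1 because it nests.
    total = 0
    for value in values:
        if value > 0:
            total += value
    return total


def flat(values):
    # A generator expression is the kind of language shorthand the first
    # rule exempts, so this reads (and scores) as a single linear step.
    return sum(value for value in values if value > 0)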

Function write_tbt has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def write_tbt(output_path: Union[str, Path], tbt_data: TbtData, noise: float = None, seed: int = None, datatype: str = "lhc") -> None:
Severity: Minor
Found in turn_by_turn/io.py - About 35 mins to fix
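One conventional way back under the argument limit is to bundle the options that travel together into a parameter object. The `NoiseOptions` dataclass below is a hypothetical sketch of that refactor, not the library's actual API:

from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union


@dataclass(frozen=True)
class NoiseOptions:
    """Groups the noise-related knobs of write_tbt."""
    noise: Optional[float] = None
    seed: Optional[int] = None


def write_tbt(
    output_path: Union[str, Path],
    tbt_data,  # a TbtData instance
    options: Optional[NoiseOptions] = None,
    datatype: str = "lhc",
) -> None:
    options = options or NoiseOptions()
    ...  # writing logic unchanged, reading noise/seed from `options`

Callers that want noise would then pass, e.g., NoiseOptions(noise=1e-4, seed=42) as the third argument; everyone else simply omits it.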

Function add_noise has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

def add_noise(data: np.ndarray, noise: float = None, sigma: float = None, seed: int = None) -> np.ndarray:
    """
    Returns the given data with added noise. Noise is generated as a standard normal distribution (mean=0,
    standard_deviation=1) with the size of the input data, and scaled by a factor before being added to
    the provided data. Said factor can either be provided, or calculated from the input data's own
    standard deviation.
Severity: Minor
Found in turn_by_turn/utils.py - About 25 mins to fix
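The docstring above describes the whole algorithm: draw standard-normal noise with the same shape as the data, scale it either by the given factor or by one derived from the data's own standard deviation, then add it. A minimal sketch consistent with that description, not necessarily the exact implementation in turn_by_turn/utils.py:

import numpy as np


def add_noise(data: np.ndarray, noise: float = None, sigma: float = None, seed: int = None) -> np.ndarray:
    """Return `data` plus standard-normal noise scaled by a single factor."""
    rng = np.random.default_rng(seed)  # seeded generator for reproducibility
    if noise is not None:
        scale = noise                  # factor provided by the caller
    elif sigma is not None:
        scale = sigma * np.std(data)   # factor from the data's own spread
    else:
        return data                    # no factor given: data is unchanged
    return data + scale * rng.standard_normal(data.shape)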
