Showing 103 of 103 total issues
File pal_base.py has 607 lines of code (exceeds 250 allowed). Consider refactoring.
# -*- coding: utf-8 -*-
# pylint:disable=anomalous-backslash-in-string
# Copyright 2020 PyePAL authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
PALBase has 42 functions (exceeds 20 allowed). Consider refactoring.
class PALBase: # pylint:disable=too-many-instance-attributes, too-many-public-methods
"""PAL base class"""
def __init__( # pylint:disable=too-many-arguments
self,
File __init__.py has 374 lines of code (exceeds 250 allowed). Consider refactoring.
# -*- coding: utf-8 -*-
# Copyright 2020 PyePAL authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
Function _pareto_classify has a Cognitive Complexity of 30 (exceeds 5 allowed). Consider refactoring.
def _pareto_classify( # pylint:disable=too-many-arguments, too-many-locals, too-many-branches
pareto_optimal_0: np.array,
not_pareto_optimal_0: np.array,
unclassified_0: np.array,
rectangle_lows: np.array,
Cognitive Complexity
Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.
A method's cognitive complexity is based on a few simple rules:
- Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
- Code is considered more complex for each "break in the linear flow of the code"
- Code is considered more complex when "flow breaking structures are nested"
Further reading
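As an illustration of the rules above, nesting is what drives cognitive complexity up in functions like `_pareto_classify`. A hedged sketch of a Pareto-dominance check (illustrative names, not PyePAL's API) shows the same logic written with nested branches and then flattened with early returns and NumPy shorthand, which the rules do not penalize:

```python
import numpy as np

def dominates_nested(a, b):
    # Nested version: each branch inside the loop, and the branch nested
    # inside the else, adds to the cognitive-complexity score.
    at_least_one_better = False
    for ai, bi in zip(a, b):
        if ai < bi:
            return False
        else:
            if ai > bi:
                at_least_one_better = True
    return at_least_one_better

def dominates_flat(a, b):
    # Flat version: language shorthand collapses the branching into one
    # expression, so the score stays low while behavior is identical.
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))
```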
Function validate_goals has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
def validate_goals(goals: Any, ndim: int) -> np.ndarray: # pylint:disable=too-many-branches
"""Create a valid array of goals. 1 for maximization, -1
for objectives that are to be minimized.
Args:
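The docstring above says goals are normalized to an array of 1 (maximize) and -1 (minimize). A table-driven rewrite is one way to cut the branch count; the accepted spellings below are assumptions for illustration, not necessarily the inputs PyePAL supports:

```python
import numpy as np

def normalize_goals(goals, ndim):
    # Hypothetical sketch: map "max"/"min" (or +1/-1) to a ±1 integer array,
    # replacing a chain of if/elif branches with one dict lookup.
    if goals is None:
        return np.ones(ndim, dtype=int)  # default: maximize every objective
    mapping = {"max": 1, "maximize": 1, 1: 1, "min": -1, "minimize": -1, -1: -1}
    try:
        arr = np.array([mapping[g] for g in goals], dtype=int)
    except KeyError as exc:
        raise ValueError(f"Unknown goal: {exc}") from exc
    if len(arr) != ndim:
        raise ValueError(f"Expected {ndim} goals, got {len(arr)}")
    return arr
```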
File validate_inputs.py has 329 lines of code (exceeds 250 allowed). Consider refactoring.
# -*- coding: utf-8 -*-
# Copyright 2020 PyePAL authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
File core.py has 293 lines of code (exceeds 250 allowed). Consider refactoring.
# -*- coding: utf-8 -*-
# pylint:disable=anomalous-backslash-in-string
# Copyright 2020 PyePAL authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
Similar blocks of code found in 2 locations. Consider refactoring.
def __init__(self, *args, **kwargs):
"""Construct the PALSklearn instance
Args:
X_design (np.array): Design space (feature matrix)
Duplicated Code
Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
Tuning
This issue has a mass of 56.
We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.
The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.
If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.
See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
Refactorings
- Extract Method
- Extract Class
- Form Template Method
- Introduce Null Object
- Pull Up Method
- Pull Up Field
- Substitute Algorithm
Further Reading
- Don't Repeat Yourself on the C2 Wiki
- Duplicated Code on SourceMaking
- Refactoring: Improving the Design of Existing Code by Martin Fowler. Duplicated Code, p76
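For the two nearly identical `__init__` bodies in this report, Pull Up Method is the natural fix from the list above: move the shared construction into a common base class so each subclass keeps only what differs. A schematic with illustrative class and attribute names:

```python
class _SharedInitBase:
    # Pull Up Method: the duplicated construction logic lives once, here.
    def __init__(self, X_design, ndim, epsilon=0.01):
        self.X_design = X_design
        self.ndim = ndim
        self.epsilon = epsilon

class ModelA(_SharedInitBase):
    # Subclasses add only their model-specific state.
    def __init__(self, X_design, ndim, model, **kwargs):
        super().__init__(X_design, ndim, **kwargs)
        self.model = model

class ModelB(_SharedInitBase):
    def __init__(self, X_design, ndim, models, **kwargs):
        super().__init__(X_design, ndim, **kwargs)
        self.models = models
```

If the two halves of the pair diverge later (different defaults, extra validation), the shared base keeps that divergence explicit instead of letting the copies drift apart silently.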
Similar blocks of code found in 2 locations. Consider refactoring.
def __init__(self, *args, **kwargs):
"""Construct the PALGBDT instance
Args:
X_design (np.array): Design space (feature matrix)
Function validate_epsilon has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
def validate_epsilon(epsilon: Any, ndim: int) -> np.ndarray:
"""Validate epsilon and return a np.array
Args:
epsilon (Any): Epsilon hyperparameter
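Much of the branching in a validator like this comes from accepting both scalar and per-objective values. Broadcasting the scalar case up front leaves a single code path; a sketch under assumed constraints (epsilon in [0, 1), which may not match PyePAL's actual bounds):

```python
import numpy as np

def normalize_epsilon(epsilon, ndim):
    # Hypothetical sketch: accept a scalar or a sequence and always
    # return a float array of length ndim.
    if np.isscalar(epsilon):
        arr = np.full(ndim, epsilon, dtype=float)  # broadcast the scalar case
    else:
        arr = np.asarray(epsilon, dtype=float)
    if arr.shape != (ndim,):
        raise ValueError(f"epsilon must have length {ndim}")
    if ((arr < 0) | (arr >= 1)).any():
        raise ValueError("epsilon values must lie in [0, 1)")
    return arr
```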
File utils.py has 257 lines of code (exceeds 250 allowed). Consider refactoring.
# -*- coding: utf-8 -*-
# Copyright 2020 PyePAL authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
Function recursive_hypervolume has a Cognitive Complexity of 15 (exceeds 5 allowed). Consider refactoring.
def recursive_hypervolume(self, dimension: int) -> float:
        """Recursive hypervolume computation, following Algorithm 3
        of the original paper."""
if self.multilist.chain_length(dimension - 1) == 0:
return 0
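The recursion is hard to follow in isolation, but its 2-D base case is simple: sort the points by one coordinate and add one rectangular slab per point that improves on the best other coordinate seen so far. This is the textbook 2-D formula for maximization with a reference point, not PyePAL's implementation:

```python
def hypervolume_2d(points, ref):
    # 2-D hypervolume for maximization: sweep x from high to low and add a
    # slab of width (x - ref_x) for each gain in y over the running best.
    hv, best_y = 0.0, ref[1]
    for x, y in sorted(points, reverse=True):
        if y > best_y:  # dominated points contribute nothing
            hv += (x - ref[0]) * (y - best_y)
            best_y = y
    return hv
```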
Function plot_jointplot has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
def plot_jointplot( # pylint:disable=invalid-name
y: np.array,
palinstance: PALBase,
labels: Union[List[str], None] = None,
figsize: tuple = (8.0, 6.0),
Function _get_max_wt has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
def _get_max_wt( # pylint:disable=too-many-arguments
rectangle_lows: np.array,
rectangle_ups: np.array,
means: np.array,
pareto_optimal_t: np.array,
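Judging from the signature, `_get_max_wt` selects the point whose uncertainty hyperrectangle is widest. Much of the per-element looping that inflates the complexity score can often be vectorized; a hedged sketch of the idea (not the exact weight PyePAL computes):

```python
import numpy as np

def max_width_index(rectangle_lows, rectangle_ups, candidate_mask):
    # Pick, among candidate points, the index whose hyperrectangle has the
    # largest diagonal, i.e. the most uncertain prediction.
    widths = np.linalg.norm(rectangle_ups - rectangle_lows, axis=1)
    widths = np.where(candidate_mask, widths, -np.inf)  # exclude non-candidates
    return int(np.argmax(widths))
```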
Function __init__ has 10 arguments (exceeds 4 allowed). Consider refactoring.
def __init__( # pylint:disable=too-many-arguments
Function validate_gbdt_models has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
def validate_gbdt_models(models: Any, ndim: int) -> List[Iterable]:
"""Make sure that the number of iterables is equal to the number of objectives
and that every iterable contains three LGBMRegressors.
Also, we check that at least the first and last models use quantile loss"""
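The docstring describes a purely structural check: ndim iterables of three regressors each, with quantile loss for the first and last. A sketch of such a check, assuming each model exposes an `objective` attribute (lightgbm's actual API may differ; with `LGBMRegressor` one would read `model.get_params()["objective"]` instead):

```python
def check_quantile_models(models, ndim):
    # Hypothetical validation: ndim groups of (lower, median, upper) models,
    # where the lower and upper models must be trained with quantile loss.
    if len(models) != ndim:
        raise ValueError(f"Expected {ndim} model triples, got {len(models)}")
    for triple in models:
        if len(triple) != 3:
            raise ValueError("Each objective needs exactly three models")
        for model in (triple[0], triple[2]):
            if getattr(model, "objective", None) != "quantile":
                raise ValueError("First and last model must use quantile loss")
    return list(models)
```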
Function _ensemble_train_one_finite_width has 10 arguments (exceeds 4 allowed). Consider refactoring.
def _ensemble_train_one_finite_width( # pylint:disable=too-many-arguments, too-many-locals
Function _get_max_wt has 9 arguments (exceeds 4 allowed). Consider refactoring.
def _get_max_wt( # pylint:disable=too-many-arguments
Function __init__ has 9 arguments (exceeds 4 allowed). Consider refactoring.
def __init__( # pylint:disable=too-many-arguments
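A common fix for these too-many-arguments findings is a parameter object: bundle the related knobs into a dataclass with documented defaults and pass it as a single argument. The names and defaults below are illustrative, not PyePAL's:

```python
from dataclasses import dataclass

@dataclass
class PALSettings:
    # Parameter object: several keyword knobs become one typed bundle.
    epsilon: float = 0.01
    delta: float = 0.05
    beta_scale: float = 1 / 9

class Optimizer:
    # The constructor drops from many positional arguments to two.
    def __init__(self, X_design, settings=None):
        self.X_design = X_design
        self.settings = settings if settings is not None else PALSettings()
```

Beyond silencing the linter, this gives the settings a single place for validation and makes call sites self-documenting (`PALSettings(epsilon=0.1)` instead of a bare positional `0.1`).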
Function _get_max_wt_all has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
def _get_max_wt_all( # pylint:disable=too-many-arguments
rectangle_lows: np.array,
rectangle_ups: np.array,
means: np.array,
sampled: np.array,