SimonBlanke/Gradient-Free-Optimizers


Showing 48 of 50 total issues

Similar blocks of code found in 2 locations. Consider refactoring.

    @TimesTracker.iter_time
    def _iteration(self):
        self.best_score = self.p_bar.score_best

        pos_new = self.iterate()
Severity: Major
Found in gradient_free_optimizers/search.py and 1 other location - About 1 day to fix
gradient_free_optimizers/search.py on lines 35..52

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge, leaving bugs behind as the two similar implementations drift apart in subtle ways.
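The two duplicated blocks flagged above (`_initialization` and `_iteration` in search.py) differ mainly in which position-generating method they call. A minimal sketch of one possible extraction, using stand-in scoring and position methods since the real class internals are not shown on this page:

```python
class SearchSketch:
    """Hypothetical reduction of the duplicated _initialization/_iteration
    scaffold to a single shared helper. Only the method names echo the
    snippets above; everything else is invented for illustration."""

    def __init__(self):
        self.score_best = float("-inf")
        self.scores = []

    def _score(self, pos):
        # stand-in objective function: higher is better
        return -abs(pos)

    def _step(self, new_position):
        # shared scaffold: generate a position, score it, track the best
        pos_new = new_position()
        score_new = self._score(pos_new)
        self.scores.append(score_new)
        self.score_best = max(self.score_best, score_new)
        return pos_new

    def _initialization(self):
        return self._step(self.init_pos)

    def _iteration(self):
        return self._step(self.iterate)

    # stand-ins for the real position generators
    def init_pos(self):
        return 5

    def iterate(self):
        return 2
```

With this shape, the tracking logic lives in one place and each public step only names its position source.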

Tuning

This issue has a mass of 123.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings to match your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too readily, try raising the threshold. If you suspect it isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
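As a rough illustration, a per-language mass threshold can be set in `.codeclimate.yml` along these lines (the value 50 is arbitrary; verify the exact keys against the codeclimate-duplication documentation for your engine version):

```yaml
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 50
```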

Refactorings

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.

    @TimesTracker.iter_time
    def _initialization(self):
        self.best_score = self.p_bar.score_best

        init_pos = self.init_pos()
Severity: Major
Found in gradient_free_optimizers/search.py and 1 other location - About 1 day to fix
gradient_free_optimizers/search.py on lines 54..71

This issue has a mass of 123.

Function plot_search_paths has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

def plot_search_paths(
    path,
    optimizer,
    opt_para,
    n_iter_max,
Severity: Minor
Found in docs/search_path_gif.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
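The rules above can be illustrated with a toy function written two ways; both are invented for illustration. The nested version pays an increment for each flow break plus an extra increment for each level of nesting, while the guard-clause version keeps every branch at the top level:

```python
def classify_nested(n):
    # each nested `if` costs its flow-break increment PLUS a nesting penalty
    if n is not None:
        if n >= 0:
            if n == 0:
                return "zero"
            else:
                return "positive"
        else:
            return "negative"
    else:
        return "missing"

def classify_flat(n):
    # same behavior; guard clauses return early, so no branch is nested
    # and no nesting penalties accrue
    if n is None:
        return "missing"
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"
```

Both functions compute the same result; only the flat one stays cheap under the cognitive-complexity rules.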

Further reading

Function evaluate has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.

    def evaluate(self, score_new):
        if self.simplex_step != 0:
            self.prev_pos = self.positions_valid[-1]

        if self.simplex_step == 1:
Severity: Minor
Found in gradient_free_optimizers/optimizers/local_opt/downhill_simplex.py - About 2 hrs to fix


SearchTracker has 21 functions (exceeds 20 allowed). Consider refactoring.

class SearchTracker:
    def __init__(self):
        super().__init__()

        self._pos_new = None
Severity: Minor
Found in gradient_free_optimizers/optimizers/core_optimizer/search_tracker.py - About 2 hrs to fix

    Identical blocks of code found in 2 locations. Consider refactoring.

            if self.pos_best is None:
                self.pos_best = self.pos_new
                self.pos_current = self.pos_new
    
                self.score_best = score_new
    gradient_free_optimizers/optimizers/base_optimizer.py on lines 18..24

    This issue has a mass of 44.

    Identical blocks of code found in 2 locations. Consider refactoring.

        def evaluate(self, score_new):
            if self.pos_best is None:
                self.pos_best = self.pos_new
                self.pos_current = self.pos_new
    
    
    Severity: Major
    Found in gradient_free_optimizers/optimizers/base_optimizer.py and 1 other location - About 1 hr to fix
    gradient_free_optimizers/optimizers/global_opt/direct_algorithm.py on lines 147..152

    This issue has a mass of 44.

    Function iterate has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.

        def iterate(self):
            # while loop for constraint opt
            while True:
                # If this is the first iteration:
                # Generate the direction and return initial_position
    Severity: Minor
    Found in gradient_free_optimizers/optimizers/grid/grid_search.py - About 1 hr to fix


    Function search_path_gif has 10 arguments (exceeds 4 allowed). Consider refactoring.

    def search_path_gif(
    Severity: Major
    Found in docs/search_path_gif.py - About 1 hr to fix
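A common remedy for long argument lists like this is a parameter object. A hypothetical sketch, assuming nothing about the real signature beyond the names visible in the snippets on this page (`path`, `n_iter_max`); the extra field and the body are invented:

```python
from dataclasses import dataclass

@dataclass
class PlotConfig:
    """Groups the many plotting arguments into one value object."""
    path: str
    n_iter_max: int
    show_title: bool = True  # invented field, stands in for the rest

def render_title(config: PlotConfig) -> str:
    # the function now takes one argument instead of ten
    title = f"{config.path}: {config.n_iter_max} iterations"
    return title if config.show_title else ""
```

Callers construct a `PlotConfig` once and pass it around, which also gives the settings a single place to grow defaults and validation.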

      Similar blocks of code found in 4 locations. Consider refactoring.

              if "vertices" in self.initialize:
                  positions = self._init_vertices(self.initialize["vertices"])
                  init_positions_ll.append(positions)
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 52..54
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 55..57
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 61..63

      This issue has a mass of 40.

      Similar blocks of code found in 4 locations. Consider refactoring.

              if "random" in self.initialize:
                  positions = self._init_random_search(self.initialize["random"])
                  init_positions_ll.append(positions)
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 55..57
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 58..60
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 61..63

      This issue has a mass of 40.

      Similar blocks of code found in 4 locations. Consider refactoring.

              if "warm_start" in self.initialize:
                  positions = self._init_warm_start(self.initialize["warm_start"])
                  init_positions_ll.append(positions)
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 52..54
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 55..57
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 58..60

      This issue has a mass of 40.

      Similar blocks of code found in 4 locations. Consider refactoring.

              if "grid" in self.initialize:
                  positions = self._init_grid_search(self.initialize["grid"])
                  init_positions_ll.append(positions)
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 52..54
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 58..60
      gradient_free_optimizers/optimizers/core_optimizer/init_positions.py on lines 61..63

      This issue has a mass of 40.
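The four near-identical blocks in init_positions.py differ only in the key checked and the method called, so they can collapse into a single dispatch table. A self-contained sketch with stand-in `_init_*` bodies (only the method and key names follow the snippets above; the real implementations are not shown on this page):

```python
class InitPositionsSketch:
    """Hypothetical table-driven rewrite of the four duplicated blocks."""

    def __init__(self, initialize):
        self.initialize = initialize

    # stand-in position generators, one per initialization strategy
    def _init_grid_search(self, n):
        return [("grid", i) for i in range(n)]

    def _init_random_search(self, n):
        return [("random", i) for i in range(n)]

    def _init_vertices(self, n):
        return [("vertex", i) for i in range(n)]

    def _init_warm_start(self, seeds):
        return [("warm", s) for s in seeds]

    def init_positions(self):
        # one loop over a key->method table replaces four copied if-blocks
        dispatch = {
            "grid": self._init_grid_search,
            "vertices": self._init_vertices,
            "random": self._init_random_search,
            "warm_start": self._init_warm_start,
        }
        init_positions_ll = []
        for key, method in dispatch.items():
            if key in self.initialize:
                init_positions_ll.append(method(self.initialize[key]))
        return init_positions_ll
```

Adding a fifth strategy then means one new table entry rather than a fifth copied block.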

      Function plot_search_paths has 9 arguments (exceeds 4 allowed). Consider refactoring.

      def plot_search_paths(
      Severity: Major
      Found in docs/search_path_gif.py - About 1 hr to fix

        Function iterate has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.

            def iterate(self):
                while True:
                    self.current_subspace = self.select_next_subspace()
                    if self.current_subspace:
                        pos = self.current_subspace.center_pos
        Severity: Minor
        Found in gradient_free_optimizers/optimizers/global_opt/direct_algorithm.py - About 1 hr to fix


        Function _init_vertices has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.

            def _init_vertices(self, n_pos):
                positions = []
                for _ in range(n_pos):
                    for _ in range(100):
                        vertex = self._get_random_vertex()


        Function print_info has 8 arguments (exceeds 4 allowed). Consider refactoring.

        def print_info(
        Severity: Major
        Found in gradient_free_optimizers/print_info.py - About 1 hr to fix

          Function search has 8 arguments (exceeds 4 allowed). Consider refactoring.

              def search(
          Severity: Major
          Found in gradient_free_optimizers/search.py - About 1 hr to fix

            Function init_search has 8 arguments (exceeds 4 allowed). Consider refactoring.

                def init_search(
            Severity: Major
            Found in gradient_free_optimizers/search.py - About 1 hr to fix

              Function __init__ has 8 arguments (exceeds 4 allowed). Consider refactoring.

                  def __init__(
              Severity: Major
              Found in gradient_free_optimizers/optimizers/exp_opt/ensemble_optimizer.py - About 1 hr to fix