ManageIQ/manageiq-providers-openstack

app/models/manageiq/providers/openstack/base_metrics_capture.rb

Summary

Maintainability: B (5 hrs estimated remediation)
Test Coverage: B (86%)

Method process_multi_counter_stats! has a Cognitive Complexity of 34 (exceeds 11 allowed). Consider refactoring.

  def process_multi_counter_stats!(counter_values_by_ts, metric_capture_module, i, timestamps, metrics_by_counter_name,
                                   data_collecting_period, log_header)
    # !!! This method modifies counter_values_by_ts
    # We have more counters in calculation. We have to make sure all counters have values present. It can
    # happen that data of related counters are not collected in the same 20s window. So we will try to collect
Severity: Minor
Found in app/models/manageiq/providers/openstack/base_metrics_capture.rb - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
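
As a rough illustration of the third rule (exact scores depend on the analyzer's implementation), a conditional buried inside other flow-breaking structures costs more than the same logic written flat:

# Deeply nested: each inner conditional pays a nesting increment on top
# of its own flow break, so the score climbs quickly.
def first_positive(values)
  values.each do |v|        # flow break
    if v.is_a?(Numeric)     # flow break, nested once
      if v.positive?        # flow break, nested twice
        return v
      end
    end
  end
  nil
end

# Guard-clause form: the same behaviour, but the flow breaks are no
# longer stacked inside one another, so the method reads as simpler.
def first_positive(values)
  values.each do |v|                              # flow break
    next unless v.is_a?(Numeric) && v.positive?   # flow break, nested once
    return v
  end
  nil
end

In practice the most idiomatic Ruby fix is often an Enumerable method such as find, which removes the explicit branching altogether.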


Cyclomatic complexity for process_multi_counter_stats! is too high. [22/11]

  def process_multi_counter_stats!(counter_values_by_ts, metric_capture_module, i, timestamps, metrics_by_counter_name,
                                   data_collecting_period, log_header)
    # !!! This method modifies counter_values_by_ts
    # We have more counters in calculation. We have to make sure all counters have values present. It can
    # happen that data of related counters are not collected in the same 20s window. So we will try to collect

Checks that the cyclomatic complexity of methods is not higher than the configured maximum. The cyclomatic complexity is the number of linearly independent paths through a method. The algorithm counts decision points and adds one.

An if statement (or unless or ?:) increases the complexity by one. An else branch does not, since it doesn't add a decision point. The && operator (or the keyword and) can be converted to a nested if statement, and ||/or is shorthand for a sequence of ifs, so they also add one. Loops can be said to have an exit condition, so they add one. Blocks that are calls to built-in iteration methods (e.g. `ary.map { ... }`) also add one; other blocks are ignored.

def each_child_node(*types)               # count begins: 1
  unless block_given?                     # unless: +1
    return to_enum(__method__, *types)
  end

  children.each do |child|                # each{}: +1
    next unless child.is_a?(Node)         # unless: +1

    yield child if types.empty? ||        # if: +1, ||: +1
                   types.include?(child.type)
  end

  self
end                                       # total: 6

Method find_meter_counters has a Cognitive Complexity of 13 (exceeds 11 allowed). Consider refactoring.

  def find_meter_counters(metric_capture_module, resource_filter, metadata_filter, log_header)
    counters = self.class.counters_by_vm.dig(ems.id, target.ems_ref)
    if counters.nil?
      counters = list_resource_meters(resource_filter, log_header) + list_metadata_meters(metadata_filter, log_header)
      # With Gnocchi, the network metrics are not associated with the instance's resource id
Severity: Minor
Found in app/models/manageiq/providers/openstack/base_metrics_capture.rb - About 35 mins to fix


Avoid parameter lists longer than 5 parameters. [7/5]

  def process_multi_counter_stats!(counter_values_by_ts, metric_capture_module, i, timestamps, metrics_by_counter_name,
                                   data_collecting_period, log_header)

Checks for methods with too many parameters.

The maximum number of parameters is configurable. Keyword arguments can optionally be excluded from the total count, as they add less complexity than positional or optional parameters.

Any number of arguments is always allowed for an initialize method defined inside a Struct.new or Data.define block, like this:

Struct.new(:one, :two, :three, :four, :five, keyword_init: true) do
  def initialize(one:, two:, three:, four:, five:)
  end
end

This is because checking the number of arguments of such an initialize method does not make sense.

NOTE: An explicit block parameter (&block) is not counted, so the cop does not encourage the pointless change of silencing the offense by making the block argument implicit.

Example: Max: 3

# good
def foo(a, b, c = 1)
end

Example: Max: 2

# bad
def foo(a, b, c = 1)
end

Example: CountKeywordArgs: true (default)

# counts keyword args towards the maximum

# bad (assuming Max is 3)
def foo(a, b, c, d: 1)
end

# good (assuming Max is 3)
def foo(a, b, c: 1)
end

Example: CountKeywordArgs: false

# don't count keyword args towards the maximum

# good (assuming Max is 3)
def foo(a, b, c, d: 1)
end

This cop also checks for the maximum number of optional parameters. This can be configured using the MaxOptionalParameters config option.

Example: MaxOptionalParameters: 3 (default)

# good
def foo(a = 1, b = 2, c = 3)
end

Example: MaxOptionalParameters: 2

# bad
def foo(a = 1, b = 2, c = 3)
end
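
For the flagged process_multi_counter_stats! signature, one way to get under the limit is to bundle the related positional parameters into a small value object and pass that instead. The sketch below is illustrative only: the MultiCounterContext name is an assumption, not project code, and the method body is elided.

# Hypothetical grouping of the five context-like arguments; keyword_init
# keeps construction readable at the call site.
MultiCounterContext = Struct.new(:metric_capture_module, :timestamps,
                                 :metrics_by_counter_name,
                                 :data_collecting_period, :log_header,
                                 keyword_init: true)

# The method now takes three parameters; inside, the existing logic would
# read context.timestamps, context.log_header, and so on.
def process_multi_counter_stats!(counter_values_by_ts, i, context)
  # ... existing interpolation logic, otherwise unchanged ...
end

Whether to prefer a Struct, a plain Hash, or keyword arguments (which this cop can be told to ignore via CountKeywordArgs) is a project-level choice.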

Avoid more than 3 levels of block nesting.

              if r["original_resource_id"].include?(original_resource_id)
                resource_filter = {"field" => "resource_id", "value" => r["id"]}
                counters = counters + list_resource_meters(resource_filter, log_header)
              end

Checks for excessive nesting of conditional and looping constructs.

You can configure whether blocks are counted using the CountBlocks option. When set to false (the default), blocks do not count towards the nesting level; set it to true to count blocks as well.

The maximum level of nesting allowed is configurable.
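
For the flagged lines, the usual way to peel off one nesting level is a guard clause: invert the condition and skip the iteration early. The sketch below assumes the inner lines sit inside a loop over Gnocchi resources; the related_resources name and the loop itself are assumptions, since only the inner if is shown in the excerpt.

related_resources.each do |r|
  # Guard clause replaces one level of if nesting; counters += is
  # equivalent to the original counters = counters + ... form.
  next unless r["original_resource_id"].include?(original_resource_id)

  resource_filter = {"field" => "resource_id", "value" => r["id"]}
  counters += list_resource_meters(resource_filter, log_header)
end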

Use #key? instead of #keys.include?.

      if available_metric_services.keys.include? metric_service_from_settings
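
The fix is mechanical: Hash#key? answers the same membership question without building the intermediate array that #keys allocates.

# Before (flagged):
if available_metric_services.keys.include? metric_service_from_settings

# After:
if available_metric_services.key?(metric_service_from_settings)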

Similar blocks of code found in 2 locations. Consider refactoring.

  def list_resource_meters(resource_filter, log_header)
    if resource_filter
      $log.debug "#{log_header} id:[#{target.name}] getting resource counters using resource filter: #{resource_filter}"
      counters = list_meters(resource_filter)
    else
Severity: Minor
Found in app/models/manageiq/providers/openstack/base_metrics_capture.rb and 1 other location - About 35 mins to fix
app/models/manageiq/providers/openstack/base_metrics_capture.rb on lines 141..149

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 34.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.


Similar blocks of code found in 2 locations. Consider refactoring.

  def list_metadata_meters(metadata_filter, log_header)
    if metadata_filter
      $log.debug "#{log_header} id:[#{target.name}] getting metadata counters using metadata filter: #{metadata_filter}"
      counters = list_meters(metadata_filter)
    else
Severity: Minor
Found in app/models/manageiq/providers/openstack/base_metrics_capture.rb and 1 other location - About 35 mins to fix
app/models/manageiq/providers/openstack/base_metrics_capture.rb on lines 130..138
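
Since the two flagged methods differ only in which filter they receive and how the log line is worded, one way to satisfy DRY would be a single private helper. This is a sketch only: the helper name is invented here, and the else branches of the real methods are truncated in the report, so the fallback below is an assumption.

def list_filtered_meters(filter, filter_kind, log_header)
  if filter
    $log.debug "#{log_header} id:[#{target.name}] getting #{filter_kind} counters using #{filter_kind} filter: #{filter}"
    list_meters(filter)
  else
    # Assumed fallback; the original methods' else branches are not shown.
    []
  end
end

# Call sites (illustrative):
#   list_filtered_meters(resource_filter, "resource", log_header)
#   list_filtered_meters(metadata_filter, "metadata", log_header)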


Avoid rescuing the Exception class. Perhaps you meant to rescue StandardError?

    rescue Exception => err
      _log.error("#{log_header} Unhandled exception during perf data collection: [#{err}], class: [#{err.class}]")
      _log.error("#{log_header}   Timings at time of error: #{Benchmark.current_realtime.inspect}")
      _log.log_backtrace(err)
      raise

Checks for rescue blocks targeting the Exception class.

Example:

# bad

begin
  do_something
rescue Exception
  handle_exception
end

Example:

# good

begin
  do_something
rescue ArgumentError
  handle_exception
end
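
Applied to the first flagged block, the narrowest change is to rescue StandardError while keeping the logging and the re-raise. Whether failures outside StandardError also need this logging before propagating is a project decision, so treat this as a sketch rather than the definitive fix.

    rescue StandardError => err
      _log.error("#{log_header} Unhandled exception during perf data collection: [#{err}], class: [#{err.class}]")
      _log.error("#{log_header}   Timings at time of error: #{Benchmark.current_realtime.inspect}")
      _log.log_backtrace(err)
      raise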

Avoid rescuing the Exception class. Perhaps you meant to rescue StandardError?

      rescue Exception => ex
        $log.debug "#{_log.prefix} Gnocchi service connection failed on #{ex}, falling back to Ceilometer.."
        target.ext_management_system.connect(:service => "Metering")

