ManageIQ/manageiq-consumption

Showing 22 of 22 total issues

Method generateInfrastructure has a Cognitive Complexity of 19 (exceeds 11 allowed). Consider refactoring.
Open

def generateInfrastructure
  date_seed = DateTime.now + 5.hours
  for i in 0..PROVIDERS.to_i do
    ext = ExtManagementSystem.new
    ext.name = "Provider_#{i}"
Severity: Minor
Found in docs/benchmarks/benchmark_events.rb - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
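
The usual way to bring a score like this down is to pull each nested block out into its own method so the outer flow stays linear. A minimal sketch for the flagged generateInfrastructure, assuming hypothetical helper names (build_provider, attach_vms) that are not taken from benchmark_events.rb:

def generateInfrastructure
  date_seed = DateTime.now + 5.hours
  (0..PROVIDERS.to_i).each do |i|
    provider = build_provider(i)          # creation stays at a single nesting level
    attach_vms(provider, i, date_seed)    # hypothetical: the nested loops/branches move behind a flat call
  end
end

def build_provider(i)
  ext = ExtManagementSystem.new
  ext.name = "Provider_#{i}"
  ext
end

Each extraction removes a "break in the linear flow" from the original method, which is exactly what the score counts.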

Method rate has a Cognitive Complexity of 13 (exceeds 11 allowed). Consider refactoring.
Open

    def rate(event, cycle_duration = nil)
      # Find tier (use context)
      # Calculate value within tier
      # For each tier used, calculate costs
      value, groupment = event.get_group(group, field) # Returns group and the unit
Severity: Minor
Found in app/models/manageiq/showback/rate.rb - About 35 mins to fix

Method check_envelope_state has a Cognitive Complexity of 13 (exceeds 11 allowed). Consider refactoring.
Open

    def check_envelope_state
      case state_was
      when 'OPEN' then
        raise _("Envelope can't change state to CLOSED from OPEN") unless state != 'CLOSED'
        # s_time = (self.start_time + 1.months).beginning_of_month # This is never used
Severity: Minor
Found in app/models/manageiq/showback/envelope.rb - About 35 mins to fix

Avoid parameter lists longer than 5 parameters. [6/5]
Open

    def occurrence(tier, value, _group, _time_span, cycle_duration, date)

Checks for methods with too many parameters.

The maximum number of parameters is configurable. Keyword arguments can optionally be excluded from the total count, as they add less complexity than positional or optional parameters.

Any number of arguments for the initialize method inside a Struct.new or Data.define block, as shown here, is always allowed:

Struct.new(:one, :two, :three, :four, :five, keyword_init: true) do
  def initialize(one:, two:, three:, four:, five:)
  end
end

This is because checking the number of arguments of the initialize method does not make sense.

NOTE: An explicit block argument (&block) is not counted; otherwise the cop could prompt an erroneous change that amounts to nothing more than making the block argument implicit.

Example: Max: 3

# good
def foo(a, b, c = 1)
end

Example: Max: 2

# bad
def foo(a, b, c = 1)
end

Example: CountKeywordArgs: true (default)

# counts keyword args towards the maximum

# bad (assuming Max is 3)
def foo(a, b, c, d: 1)
end

# good (assuming Max is 3)
def foo(a, b, c: 1)
end

Example: CountKeywordArgs: false

# don't count keyword args towards the maximum

# good (assuming Max is 3)
def foo(a, b, c, d: 1)
end

This cop also checks for the maximum number of optional parameters. This can be configured using the MaxOptionalParameters config option.

Example: MaxOptionalParameters: 3 (default)

# good
def foo(a = 1, b = 2, c = 3)
end

Example: MaxOptionalParameters: 2

# bad
def foo(a = 1, b = 2, c = 3)
end
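
For the flagged occurrence method, one hedged way to get under the limit is to group the related positional parameters into a small value object (or switch to keyword arguments and exclude them via CountKeywordArgs: false). The RatingContext struct below is an assumption for illustration only; it does not exist in the repository.

# Hypothetical parameter object bundling the values the rating helpers share
RatingContext = Struct.new(:tier, :value, :group, :time_span, :cycle_duration, :date, keyword_init: true)

def occurrence(context)
  # use context.tier, context.value, context.cycle_duration, context.date as before
end

# Hypothetical call site
occurrence(RatingContext.new(tier: tier, value: value, group: group,
                             time_span: time_span, cycle_duration: cycle_duration, date: date))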

Avoid parameter lists longer than 5 parameters. [6/5]
Open

    def rate_with_values(tier, value, group, time_span, cycle_duration, date = Time.current)

Avoid parameter lists longer than 5 parameters. [6/5]
Open

    def duration(tier, value, group, time_span, cycle_duration, date)

Avoid parameter lists longer than 5 parameters. [6/5]
Open

    def quantity(tier, value, group, _time_span, cycle_duration, date)

Method validate_interval has a Cognitive Complexity of 12 (exceeds 11 allowed). Consider refactoring.
Open

    def validate_interval
      raise _("Start value of interval is greater than end value") unless tier_start_value < tier_end_value
      ManageIQ::Showback::Tier.where(:rate => rate).each do |tier|
        # Returns true == overlap, false == no overlap
        next unless self != tier
Severity: Minor
Found in app/models/manageiq/showback/tier.rb - About 25 mins to fix

Avoid parameter lists longer than 5 parameters. [6/5]
Open

    def calculate_list_of_costs_input(resource_type:,
                                      data:,
                                      context: nil,
                                      start_time: nil,
                                      end_time: nil,

Similar blocks of code found in 2 locations. Consider refactoring.
Open

  def FLAVOR_cpu_reserved
    numcpus = resource.class.name.ends_with?("Container") ? resource.vim_performance_states.last.state_data[:numvcpus] : resource.try(:cpu_total_cores) || 0
    update_value_flavor("cores", [numcpus, "cores"])
Severity: Minor
Found in app/models/manageiq/showback/data_rollup/flavor.rb and 1 other location - About 15 mins to fix
app/models/manageiq/showback/data_rollup/flavor.rb on lines 13..15

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 25.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
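
The two flavor methods in this finding and the matching memory variant reported below differ only in which metric they read, so one hedged refactoring is to move the shared container-vs-resource lookup into a single helper; flavor_metric below is a hypothetical name, not a method that exists in flavor.rb.

def FLAVOR_cpu_reserved
  update_value_flavor("cores", [flavor_metric(:numvcpus, :cpu_total_cores), "cores"])
end

def FLAVOR_memory_reserved
  update_value_flavor("memory", [flavor_metric(:total_mem, :ram_size), "Mb"])
end

# Shared lookup: containers read from the last performance state, everything else from the resource itself
def flavor_metric(state_key, resource_method)
  if resource.class.name.ends_with?("Container")
    resource.vim_performance_states.last.state_data[state_key]
  else
    resource.try(resource_method) || 0
  end
end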

Identical blocks of code found in 2 locations. Consider refactoring.
Open

def generateHardware(n)
  h = Hardware.new
  h.cpu_sockets = n
  h.cpu_cores_per_socket = n%2
  h.cpu_total_cores = n
Severity: Minor
Found in docs/benchmarks/benchmark_events.rb and 1 other location - About 15 mins to fix
docs/demo/seed_play_data.rb on lines 3..10

Similar blocks of code found in 2 locations. Consider refactoring.
Open

  def FLAVOR_memory_reserved
    tmem = resource.class.name.ends_with?("Container") ? resource.vim_performance_states.last.state_data[:total_mem] : resource.try(:ram_size) || 0
    update_value_flavor("memory", [tmem, "Mb"])
Severity: Minor
Found in app/models/manageiq/showback/data_rollup/flavor.rb and 1 other location - About 15 mins to fix
app/models/manageiq/showback/data_rollup/flavor.rb on lines 5..7

Identical blocks of code found in 2 locations. Consider refactoring.
Open

def generateHardware(n)
  h = Hardware.new
  h.cpu_sockets = n
  h.cpu_cores_per_socket = n%2
  h.cpu_total_cores = n
Severity: Minor
Found in docs/demo/seed_play_data.rb and 1 other location - About 15 mins to fix
docs/benchmarks/benchmark_events.rb on lines 21..28

Do not suppress exceptions.
Open

rescue LoadError
Severity: Minor
Found in Rakefile by rubocop

Checks for rescue blocks with no body.

Example:

# bad
def some_method
  do_something
rescue
end

# bad
begin
  do_something
rescue
end

# good
def some_method
  do_something
rescue
  handle_exception
end

# good
begin
  do_something
rescue
  handle_exception
end

Example: AllowComments: true (default)

# good
def some_method
  do_something
rescue
  # do nothing
end

# good
begin
  do_something
rescue
  # do nothing
end

Example: AllowComments: false

# bad
def some_method
  do_something
rescue
  # do nothing
end

# bad
begin
  do_something
rescue
  # do nothing
end

Example: AllowNil: true (default)

# good
def some_method
  do_something
rescue
  nil
end

# good
begin
  do_something
rescue
  # do nothing
end

# good
do_something rescue nil

Example: AllowNil: false

# bad
def some_method
  do_something
rescue
  nil
end

# bad
begin
  do_something
rescue
  nil
end

# bad
do_something rescue nil
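
In the Rakefile case the fix is usually to make the intent of the empty rescue explicit. The sketch below assumes the rescue guards an optional development dependency (the actual require in the Rakefile is not shown in this report), and warning rather than staying silent is only one reasonable choice:

begin
  require "optional_dev_dependency" # hypothetical; stands in for whatever the Rakefile actually requires
rescue LoadError
  warn "optional dependency not available; skipping its rake tasks"
end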

metadata['rubygems_mfa_required'] must be set to 'true'.
Open

Gem::Specification.new do |spec|
  spec.name          = "manageiq-consumption"
  spec.version       = ManageIQ::Showback::VERSION
  spec.authors       = ["ManageIQ Authors"]

Severity: Minor
Found in manageiq-consumption.gemspec by rubocop

Requires a gemspec to have rubygems_mfa_required metadata set.

This setting tells RubyGems that MFA (Multi-Factor Authentication) is required for an account to be able to perform privileged operations such as the following (see RubyGems' documentation for the full list of privileged operations):

  • gem push
  • gem yank
  • gem owner --add/remove
  • adding or removing owners using gem ownership page

This helps make your gem more secure, as users can be more confident that gem updates were pushed by maintainers.

Example:

# bad
Gem::Specification.new do |spec|
  # no `rubygems_mfa_required` metadata specified
end

# good
Gem::Specification.new do |spec|
  spec.metadata = {
    'rubygems_mfa_required' => 'true'
  }
end

# good
Gem::Specification.new do |spec|
  spec.metadata['rubygems_mfa_required'] = 'true'
end

# bad
Gem::Specification.new do |spec|
  spec.metadata = {
    'rubygems_mfa_required' => 'false'
  }
end

# good
Gem::Specification.new do |spec|
  spec.metadata = {
    'rubygems_mfa_required' => 'true'
  }
end

# bad
Gem::Specification.new do |spec|
  spec.metadata['rubygems_mfa_required'] = 'false'
end

# good
Gem::Specification.new do |spec|
  spec.metadata['rubygems_mfa_required'] = 'true'
end
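
Applied to the flagged gemspec, the one-line form of the fix would sit alongside the fields already shown in the finding (remaining spec fields omitted here; exact placement is a maintainer choice):

Gem::Specification.new do |spec|
  spec.name          = "manageiq-consumption"
  spec.version       = ManageIQ::Showback::VERSION
  spec.authors       = ["ManageIQ Authors"]
  spec.metadata["rubygems_mfa_required"] = "true" # require MFA for gem push, yank, and owner changes
end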

Avoid more than 3 levels of block nesting.
Open

        if i%2==0
          v.tags=Tag.where(:name => "/managed/location/ny")
        else
          v.tags=Tag.where(:name => "/managed/location/chicago")
        end
Severity: Minor
Found in docs/benchmarks/benchmark_events.rb by rubocop

Checks for excessive nesting of conditional and looping constructs.

You can configure whether blocks count toward the nesting level with the CountBlocks option. When set to false (the default), blocks are not counted; set it to true to count them as well.

The maximum level of nesting allowed is configurable.
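
One way to peel a level off the flagged branch is to compute the varying part first and keep a single assignment; this is only a sketch, where i.even? is equivalent to the original i%2==0 test:

location = i.even? ? "ny" : "chicago"
v.tags = Tag.where(:name => "/managed/location/#{location}")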

Wrap expressions with varying precedence with parentheses to avoid ambiguity.
Open

      fix_inter * tier.fixed_rate + (value ? var_inter * tier.variable_rate : 0) # fixed always, variable if value

Looks for expressions containing multiple binary operators where precedence is ambiguous due to lack of parentheses. For example, in 1 + 2 * 3, the multiplication will happen before the addition, but lexically it appears that the addition will happen first.

The cop does not consider unary operators (i.e. !a or -b) or comparison operators (i.e. a =~ b) because those are not ambiguous.

NOTE: Ranges are handled by Lint/AmbiguousRange.

Example:

# bad
a + b * c
a || b && c
a ** b + c

# good (different precedence)
a + (b * c)
a || (b && c)
(a ** b) + c

# good (same precedence)
a + b + c
a * b / c % d
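
Parenthesized forms of the expression flagged in this entry and of the similar expressions reported in the entries that follow; the grouping matches the precedence Ruby already applies, so behavior should be unchanged and only the ambiguity goes away:

(fix_inter * tier.fixed_rate) + (value ? var_inter * tier.variable_rate : 0) # fixed always, variable if value

(starts_with_zero? && value.zero?) || (value > tier_start_value && value.to_f <= tier_end_value)

((value * data_rollup_days) + @metrics.average(:cpu_usage_rate_average)) / (data_rollup_days + 1)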

Wrap expressions with varying precedence with parentheses to avoid ambiguity.
Open

      starts_with_zero? && value.zero? || value > tier_start_value && value.to_f <= tier_end_value

Wrap expressions with varying precedence with parentheses to avoid ambiguity.
Open

      starts_with_zero? && value.zero? || value > tier_start_value && value.to_f <= tier_end_value

Wrap expressions with varying precedence with parentheses to avoid ambiguity.
Open

      ((value * data_rollup_days + @metrics.average(:cpu_usage_rate_average)) / (data_rollup_days + 1))
