torvalds/linux


Showing 1,485 of 1,485 total issues

File kunit_parser.py has 639 lines of code (exceeds 250 allowed). Consider refactoring.

# SPDX-License-Identifier: GPL-2.0
#
# Parses KTAP test results from a kernel dmesg log and incrementally prints
# results with reader-friendly format. Stores and returns test results in a
# Test object.
Severity: Major
Found in tools/testing/kunit/kunit_parser.py - About 1 day to fix

    File tpm2.py has 608 lines of code (exceeds 250 allowed). Consider refactoring.

    # SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
    
    import hashlib
    import os
    import socket
    Severity: Major
    Found in tools/testing/selftests/tpm2/tpm2.py - About 1 day to fix

      Function parse_maintainers has a Cognitive Complexity of 67 (exceeds 5 allowed). Consider refactoring.

          def parse_maintainers(self, path):
              """Parse all the MAINTAINERS lines into ReST for human-readability"""
      
              result = list()
              result.append(".. _maintainers:")
      Severity: Minor
      Found in Documentation/sphinx/maintainers_include.py - About 1 day to fix

      Cognitive Complexity

      Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

      A method's cognitive complexity is based on a few simple rules:

      • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
      • Code is considered more complex for each "break in the linear flow of the code"
      • Code is considered more complex when "flow breaking structures are nested"
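The rules are easiest to see side by side. In this hypothetical sketch (not code from the kernel tree), both functions contain the same branch logic, so their cyclomatic complexity is similar, but nesting raises the cognitive complexity of the second; the per-line scores follow the published Cognitive Complexity rules and are approximate.

```python
def grade_flat(score):
    """Guard clauses keep the flow linear: each `if` costs +1, no nesting."""
    if score >= 90:        # +1
        return "A"
    if score >= 80:        # +1
        return "B"
    return "C"             # total: 2


def grade_nested(score):
    """Same behaviour, but flow-breaking structures are nested."""
    if score >= 90:        # +1
        return "A"
    else:                  # +1
        if score >= 80:    # +2 (+1 for the if, +1 for nesting)
            return "B"
        else:              # +1
            return "C"     # total: 5
```

Both functions return identical results for every input; only the shape of the control flow differs, which is exactly the distinction Cognitive Complexity is designed to capture.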


      Similar blocks of code found in 2 locations. Consider refactoring.

          if opt == '-h':
              print()
              sys.exit()
          elif opt in ("-t", "--trace_file"):
              valid1 = True
      Severity: Major
      Found in tools/power/x86/amd_pstate_tracer/amd_pstate_trace.py and 1 other location - About 1 day to fix
      tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py on lines 519..535

      Duplicated Code

      Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

      Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
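For the duplicated option handling flagged above, one fix is to move the shared branch into a single helper that both tracer scripts import, so the logic has one authoritative home. This is an illustrative sketch: the option names and the `print_help` callback mirror the excerpts, but the helper name and the returned settings dict are hypothetical.

```python
def parse_tracer_opts(opts, tool_name, print_help=print):
    """One shared implementation of the '-h' / '-t' handling duplicated
    across amd_pstate_trace.py and intel_pstate_tracer.py."""
    settings = {'trace_file': None, 'valid1': False}
    for opt, arg in opts:
        if opt == '-h':
            print_help(tool_name)      # each caller passes its own help text
            raise SystemExit
        elif opt in ('-t', '--trace_file'):
            settings['trace_file'] = arg
            settings['valid1'] = True
    return settings
```

Either script would then call `parse_tracer_opts(opts, 'amd_pstate', ...)` instead of carrying its own copy, so a future fix lands in both tools at once.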

      Tuning

      This issue has a mass of 152.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

      The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

      If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

      See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
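As a concrete illustration, raising the Python mass threshold might look like the fragment below. This is a sketch: the value 50 is illustrative, not a recommendation, and the exact top-level key (`plugins:` versus the older `engines:`) depends on the Code Climate configuration version in use.

```yaml
# .codeclimate.yml — tuning the duplication engine's mass threshold
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 50   # higher value = fewer, larger duplications reported
```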


      Similar blocks of code found in 2 locations. Consider refactoring.

              if opt == '-h':
                  print_help('intel_pstate')
                  sys.exit()
              elif opt in ("-t", "--trace_file"):
                  valid1 = True
      Severity: Major
      Found in tools/power/x86/intel_pstate_tracer/intel_pstate_tracer.py and 1 other location - About 1 day to fix
      tools/power/x86/amd_pstate_tracer/amd_pstate_trace.py on lines 272..288

This issue has a mass of 152.

      Function getFPDT has a Cognitive Complexity of 64 (exceeds 5 allowed). Consider refactoring.

      def getFPDT(output):
          rectype = {}
          rectype[0] = 'Firmware Basic Boot Performance Record'
          rectype[1] = 'S3 Performance Table Record'
          prectype = {}
      Severity: Minor
      Found in tools/power/pm-graph/sleepgraph.py - About 1 day to fix


      Similar blocks of code found in 2 locations. Consider refactoring.

      def get_portpools(ports, verify_existence=False):
          d = run_json_cmd("devlink sb port pool -j -n")
          portpools = PortPoolList()
          for port in ports:
              err_msg = None
      tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py on lines 249..261

This issue has a mass of 146.

      Similar blocks of code found in 2 locations. Consider refactoring.

      def get_tcbinds(ports, verify_existence=False):
          d = run_json_cmd("devlink sb tc bind show -j -n")
          tcbinds = TcBindList()
          for port in ports:
              err_msg = None
      tools/testing/selftests/drivers/net/mlxsw/sharedbuffer_configuration.py on lines 321..333

This issue has a mass of 146.

      Function __init__ has a Cognitive Complexity of 63 (exceeds 5 allowed). Consider refactoring.

        def __init__(self, jd: dict):
          """Constructor passed the dictionary of parsed json values."""
      
          def llx(x: int) -> str:
            """Convert an int to a string similar to a printf modifier of %#llx."""
      Severity: Minor
      Found in tools/perf/pmu-events/jevents.py - About 1 day to fix


      Similar blocks of code found in 2 locations. Consider refactoring.

              if self.params.have_ipc:
                  insn_pcnt = PercentToOneDP(insn_cnt, parent_item.insn_cnt)
                  cyc_pcnt = PercentToOneDP(cyc_cnt, parent_item.cyc_cnt)
                  br_pcnt = PercentToOneDP(branch_count, parent_item.branch_count)
                  ipc = CalcIPC(cyc_cnt, insn_cnt)
      Severity: Major
      Found in tools/perf/scripts/python/exported-sql-viewer.py and 1 other location - About 1 day to fix
      tools/perf/scripts/python/exported-sql-viewer.py on lines 575..582

This issue has a mass of 143.

      Similar blocks of code found in 2 locations. Consider refactoring.

              if self.params.have_ipc:
                  insn_pcnt = PercentToOneDP(insn_cnt, parent_item.insn_cnt)
                  cyc_pcnt = PercentToOneDP(cyc_cnt, parent_item.cyc_cnt)
                  br_pcnt = PercentToOneDP(branch_count, parent_item.branch_count)
                  ipc = CalcIPC(cyc_cnt, insn_cnt)
      Severity: Major
      Found in tools/perf/scripts/python/exported-sql-viewer.py and 1 other location - About 1 day to fix
      tools/perf/scripts/python/exported-sql-viewer.py on lines 857..864

This issue has a mass of 143.

      Function parseTraceLog has a Cognitive Complexity of 61 (exceeds 5 allowed). Consider refactoring.

      def parseTraceLog(data):
          sysvals.vprint('Analyzing the ftrace data (%s)...' % \
              os.path.basename(sysvals.ftracefile))
          # if available, calculate cgfilter allowable ranges
          cgfilter = []
      Severity: Minor
      Found in tools/power/pm-graph/bootgraph.py - About 1 day to fix


      Function CopyTreeCellsToClipboard has a Cognitive Complexity of 60 (exceeds 5 allowed). Consider refactoring.

      def CopyTreeCellsToClipboard(view, as_csv=False, with_hdr=False):
          indexes = view.selectedIndexes()
          if not len(indexes):
              return
      
      
      Severity: Minor
      Found in tools/perf/scripts/python/exported-sql-viewer.py - About 1 day to fix


      Function main has a Cognitive Complexity of 58 (exceeds 5 allowed). Consider refactoring.

      def main(argv):
          nlmsg_atoms.ovskey = ovskey
          nlmsg_atoms.ovsactions = ovsactions
      
          # version check for pyroute2
      Severity: Minor
      Found in tools/testing/selftests/net/openvswitch/ovs-dpctl.py - About 1 day to fix


      Function getPhaseRows has a Cognitive Complexity of 58 (exceeds 5 allowed). Consider refactoring.

          def getPhaseRows(self, devlist, row=0, sortby='length'):
              # clear all rows and set them to undefined
              remaining = len(devlist)
              rowdata = dict()
              sortdict = dict()
      Severity: Minor
      Found in tools/power/pm-graph/sleepgraph.py - About 1 day to fix


      Function _load_nested_sets has a Cognitive Complexity of 57 (exceeds 5 allowed). Consider refactoring.

          def _load_nested_sets(self):
              attr_set_queue = list(self.root_sets.keys())
              attr_set_seen = set(self.root_sets.keys())
      
              while len(attr_set_queue):
      Severity: Minor
      Found in tools/net/ynl/ynl-gen-c.py - About 1 day to fix


      Similar blocks of code found in 2 locations. Consider refactoring.

      def rb_first(root):
          if root.type == rb_root_type.get_type():
              node = root.address.cast(rb_root_type.get_type().pointer())
          elif root.type != rb_root_type.get_type().pointer():
              raise gdb.GdbError("Must be struct rb_root not {}".format(root.type))
      Severity: Major
      Found in scripts/gdb/linux/rbtree.py and 1 other location - About 1 day to fix
      scripts/gdb/linux/rbtree.py on lines 29..42

This issue has a mass of 129.

      Similar blocks of code found in 2 locations. Consider refactoring.

      def rb_last(root):
          if root.type == rb_root_type.get_type():
              node = root.address.cast(rb_root_type.get_type().pointer())
          elif root.type != rb_root_type.get_type().pointer():
              raise gdb.GdbError("Must be struct rb_root not {}".format(root.type))
      Severity: Major
      Found in scripts/gdb/linux/rbtree.py and 1 other location - About 1 day to fix
      scripts/gdb/linux/rbtree.py on lines 13..26
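The rb_first()/rb_last() pair flagged here differs only in which child pointer the walk follows, so a single parameterised helper would remove the duplication. A self-contained analogue, with plain dict nodes standing in for the gdb.Value tree (the real refactor would keep the rb_root type checks shown in the excerpts):

```python
def _tree_extreme(node, child):
    """Follow `child` ('left' or 'right') links to the end of the tree."""
    if node is None:
        return None
    while node.get(child) is not None:
        node = node[child]
    return node


def tree_first(root):
    return _tree_extreme(root, 'left')    # leftmost node, like rb_first()


def tree_last(root):
    return _tree_extreme(root, 'right')   # rightmost node, like rb_last()
```

With the shared walk in one place, a fix to the type checking or the traversal applies to both entry points at once instead of needing two matching edits.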

This issue has a mass of 129.

      Function dmidecode has a Cognitive Complexity of 53 (exceeds 5 allowed). Consider refactoring.

      def dmidecode(mempath, fatal=False):
          out = dict()
      
          # the list of values to retrieve, with hardcoded (type, idx)
          info = {
      Severity: Minor
      Found in tools/power/pm-graph/sleepgraph.py - About 1 day to fix


      Similar blocks of code found in 3 locations. Consider refactoring.

      class LxPageAddress(gdb.Command):
          """struct page to linear mapping address"""
      
          def __init__(self):
              super(LxPageAddress, self).__init__("lx-page_address", gdb.COMMAND_USER)
      Severity: Major
      Found in scripts/gdb/linux/mm.py and 2 other locations - About 1 day to fix
      scripts/gdb/linux/mm.py on lines 299..310
      scripts/gdb/linux/mm.py on lines 329..340
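The three near-identical command classes flagged in scripts/gdb/linux/mm.py each wrap one conversion function in the same registration boilerplate, so a small factory could generate them from a single template. A hedged, self-contained sketch: plain classes stand in for gdb.Command, and the converter functions and constants below are illustrative, not the kernel's actual page-translation logic.

```python
def make_command(name, docstring, convert):
    """Build one command class around `convert`, replacing the
    copy-pasted near-identical class bodies with a single template."""
    class _Command:
        def __init__(self):
            self.name = name

        def invoke(self, arg):
            return convert(arg)

    _Command.__doc__ = docstring
    _Command.__name__ = name
    return _Command


PAGE_SHIFT = 12  # illustrative constant


def pfn_to_page(pfn):
    return ('page', pfn)


def page_to_phys(page):
    return page[1] << PAGE_SHIFT


# Two commands from one template instead of two duplicated class bodies:
LxPfnToPage = make_command('lx-pfn_to_page', 'PFN to struct page', pfn_to_page)
LxPageToPhys = make_command('lx-page_to_phys', 'page to physical address',
                            page_to_phys)
```

Each generated class keeps its own name and docstring, so help output stays unchanged while the shared structure lives in one place.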

This issue has a mass of 125.
