Maroc-OS/decompiler

Showing 2,968 of 2,968 total issues

Function should_rename has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

  def should_rename(self, expr):
    if type(expr.parent) is deref_t:
      return False

    in_phi = type(expr.parent) is phi_t
Severity: Minor
Found in src/renamer.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

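As a standalone illustration of these rules (nothing below is this repository's code; the 'deref'/'phi' strings merely echo the deref_t/phi_t checks in the preview above), the nested form pays an increment for each branch plus one per level of nesting, while the flattened form with early returns pays only for the branches themselves.

  # Illustrative sketch only: same behaviour, different Cognitive Complexity.

  def classify_nested(expr_kind, parent_kind):
    # Each nested branch costs its own increment plus a nesting increment.
    if parent_kind is not None:
      if parent_kind == 'deref':
        return 'skip'
      else:
        if expr_kind == 'phi':
          return 'phi-operand'
    return 'rename'

  def classify_flat(expr_kind, parent_kind):
    # Guard clauses are single breaks in linear flow with no nesting penalty.
    if parent_kind is None:
      return 'rename'
    if parent_kind == 'deref':
      return 'skip'
    if expr_kind == 'phi':
      return 'phi-operand'
    return 'rename'

  assert classify_nested('phi', 'phi') == classify_flat('phi', 'phi') == 'phi-operand'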

Avoid deeply nested control flow statements.
Open

            if type(dest) != value_t:
              print '%x: cannot follow jump to %s' % (ea, repr(dest))
              continue

Severity: Major
Found in src/graph.py - About 45 mins to fix
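
A common way to relieve this kind of nesting is to move the innermost loop body into its own small helper, so the guard and the continue sit at the top level of that helper rather than several blocks deep. The sketch below is standalone and hedged: follow_jumps, the loop, and the value_t stub are invented stand-ins for the real context in src/graph.py.

  # Illustrative sketch: hoist the nested check into a helper with a guard clause.

  class value_t(object):
    def __init__(self, value):
      self.value = value

  def follow_jumps(ea, destinations):
    followed = []
    for dest in destinations:
      if type(dest) != value_t:
        # Same message as the preview above, now at the shallowest level of the helper.
        print('%x: cannot follow jump to %s' % (ea, repr(dest)))
        continue
      followed.append(dest.value)
    return followed

  print(follow_jumps(0x401000, [value_t(0x401010), 'eax']))  # warns about 'eax', follows the rest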

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        for stmt in start.container[:self.use_stmt.index()+1]:
          stmts.append(stmt)
Severity: Minor
Found in src/ssa.py and 1 other location - About 45 mins to fix
src/ssa.py on lines 297..298

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 35.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

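For the two src/ssa.py sites flagged here and in the next entry, the usual remedy is to give the "statements up to and including the use site" slice a single, named home that both call sites share. A minimal standalone sketch, assuming plain lists in place of the project's container and statement objects (the helper name is invented):

  # Standalone sketch of extracting the duplicated "collect leading statements" step.

  def statements_up_to(container, use_index):
    # Single authoritative definition of the slice both call sites need.
    return list(container[:use_index + 1])

  start_container = ['s0', 's1', 's2', 's3']
  end_container = ['e0', 'e1', 'e2']

  stmts = []
  stmts.extend(statements_up_to(start_container, 1))  # was: for stmt in start.container[:...]
  stmts.extend(statements_up_to(end_container, 2))    # was: for stmt in end.container[:...]
  print(stmts)  # ['s0', 's1', 'e0', 'e1', 'e2']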

Similar blocks of code found in 2 locations. Consider refactoring.
Open

      for stmt in end.container[:self.use_stmt.index()+1]:
        stmts.append(stmt)
Severity: Minor
Found in src/ssa.py and 1 other location - About 45 mins to fix
src/ssa.py on lines 284..285

This issue has a mass of 35.

Similar blocks of code found in 3 locations. Consider refactoring.
Open

      expr = assign_t(self.stackreg.copy(), add_t(self.stackreg.copy(), value_t(4, self.address_size)))
Severity: Major
Found in src/ir/intel.py and 2 other locations - About 40 mins to fix
src/ir/intel.py on lines 264..264
src/ir/intel.py on lines 287..287

This issue has a mass of 34.

Similar blocks of code found in 3 locations. Consider refactoring.
Open

      expr = assign_t(self.stackreg.copy(), sub_t(self.stackreg.copy(), value_t(4, self.address_size)))
Severity: Major
Found in src/ir/intel.py and 2 other locations - About 40 mins to fix
src/ir/intel.py on lines 271..271
src/ir/intel.py on lines 287..287

This issue has a mass of 34.
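
The previous src/ir/intel.py entry, this one, and a third occurrence flagged further below all build the same stack-pointer adjustment and differ only in whether they add or subtract. One helper parameterised on the direction removes the triplication. Standalone sketch only: the tuples stand in for the project's assign_t/add_t/sub_t/value_t expression classes.

  # Standalone sketch: one helper instead of three near-identical expression builders.

  def make_stack_adjust(stackreg, address_size, grows_down):
    # ('assign', dest, src) stands in for assign_t; '+'/'-' for add_t/sub_t.
    op = '-' if grows_down else '+'
    return ('assign', stackreg, (op, stackreg, ('value', 4, address_size)))

  push_expr = make_stack_adjust('esp', 32, grows_down=True)   # was the sub_t variant
  pop_expr = make_stack_adjust('esp', 32, grows_down=False)   # was the add_t variant
  print(push_expr)
  print(pop_expr)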

Similar blocks of code found in 2 locations. Consider refactoring.
Open

  @property
  def jump_from(self):
    """ return a list of blocks where `block` leads to, based on gotos in `block` """
    return [self.function.blocks[ea] for ea in self.jump_from_ea]
Severity: Minor
Found in src/decompiler.py and 1 other location - About 40 mins to fix
src/decompiler.py on lines 39..42

This issue has a mass of 34.

Similar blocks of code found in 3 locations. Consider refactoring.
Open

      expr = assign_t(self.stackreg.copy(), add_t(self.stackreg.copy(), value_t(4, self.address_size)))
Severity: Major
Found in src/ir/intel.py and 2 other locations - About 40 mins to fix
src/ir/intel.py on lines 264..264
src/ir/intel.py on lines 271..271

This issue has a mass of 34.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

  @property
  def jump_to(self):
    """ return a list of blocks where `block` leads to, based on gotos in `block` """
    return [self.function.blocks[ea] for ea in self.jump_to_ea]
Severity: Minor
Found in src/decompiler.py and 1 other location - About 40 mins to fix
src/decompiler.py on lines 58..61

This issue has a mass of 34.
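
This entry and the jump_from entry above differ only in which address list they read (note that the jump_from docstring appears to be a verbatim copy of jump_to's). A shared lookup helper keeps the address-to-block resolution in one place. Standalone sketch; the _blocks_at name and the dict-backed stubs for function and block are assumptions, not the project's API.

  # Standalone sketch of sharing the ea -> block lookup between the two properties.

  class block_t(object):
    def __init__(self, function, ea, jump_from_ea, jump_to_ea):
      self.function = function
      self.jump_from_ea = jump_from_ea
      self.jump_to_ea = jump_to_ea
      function.blocks[ea] = self

    def _blocks_at(self, eas):
      # Single place that turns addresses into block objects.
      return [self.function.blocks[ea] for ea in eas]

    @property
    def jump_from(self):
      """ blocks that lead to this block """
      return self._blocks_at(self.jump_from_ea)

    @property
    def jump_to(self):
      """ blocks this block leads to, based on its gotos """
      return self._blocks_at(self.jump_to_ea)

  class function_t(object):
    def __init__(self):
      self.blocks = {}

  f = function_t()
  a = block_t(f, 0x10, [], [0x20])
  b = block_t(f, 0x20, [0x10], [])
  print(b.jump_from == [a], a.jump_to == [b])  # True True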

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self, op1, operator1, op2, operator2, op3):
Severity: Minor
Found in src/expressions.py - About 35 mins to fix

Function process has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def process(self, function, ssa_tagger, block, stmt, call):
Severity: Minor
Found in src/callconv.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self, function, depth_first=False, ltr=True, filter=None, klass=None):
Severity: Minor
Found in src/iterators.py - About 35 mins to fix

Function visit has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def visit(function, block, conds, visited, context, priors):
Severity: Minor
Found in src/filters/controlflow.py - About 35 mins to fix

Function assemble_connected has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def assemble_connected(self, container, blocks, block, prioritizer=None, exclude=[]):
Severity: Minor
Found in src/filters/controlflow.py - About 35 mins to fix
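
For the "has 5 arguments" entries above, one conventional refactoring is to introduce a parameter object that groups values which always travel together, for example the block/stmt/call trio passed to process in src/callconv.py. The sketch below is illustrative and standalone (a Python 3 dataclass is used for brevity); the grouping and the CallSite name are assumptions about how the call sites relate, not the project's design.

  # Standalone sketch: group related parameters into one object.

  from dataclasses import dataclass
  from typing import Any

  @dataclass
  class CallSite:
    block: Any
    stmt: Any
    call: Any

  def process(function, ssa_tagger, site):
    # Three positional parameters collapse into one cohesive value.
    return (function, ssa_tagger, site.block, site.stmt, site.call)

  print(process('func', 'tagger', CallSite(block='b0', stmt='s3', call='c1')))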

Function combine_conditions has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

  def combine_conditions(cls, block):
    """ combine two ifs into a boolean or (||) or a boolean and (&&). """

    if not cls.is_branch_block(block):
      return False
Severity: Minor
Found in src/filters/controlflow.py - About 35 mins to fix

Function add_sub has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def add_sub(expr):
  """ Simplify nested math expressions when the second operand of
      each expression is a number literal.

  (a +/- n1) +/- n2 => (a +/- n3) with n3 = n1 +/- n2
Severity: Minor
Found in src/filters/simplify_expressions.py - About 35 mins to fix
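
The docstring above states the rewrite add_sub performs: (a +/- n1) +/- n2 becomes (a +/- n3) with n3 folded from the two constants. A standalone sketch of that folding over simple ('op', left, right) tuples, independent of the project's expression classes:

  # Standalone sketch of folding (a +/- n1) +/- n2 into (a +/- n3).

  def fold_add_sub(expr):
    op, inner, n2 = expr
    if op not in ('+', '-') or not isinstance(n2, int):
      return expr
    if not (isinstance(inner, tuple) and len(inner) == 3):
      return expr
    inner_op, a, n1 = inner
    if inner_op not in ('+', '-') or not isinstance(n1, int):
      return expr
    # Combine as signed constants so the two operators fold correctly.
    total = (n1 if inner_op == '+' else -n1) + (n2 if op == '+' else -n2)
    return ('+', a, total) if total >= 0 else ('-', a, -total)

  print(fold_add_sub(('+', ('-', 'x', 3), 5)))  # ('+', 'x', 2): (x - 3) + 5 == x + 2
  print(fold_add_sub(('-', ('+', 'x', 1), 4)))  # ('-', 'x', 3): (x + 1) - 4 == x - 3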

Function update has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

  def update(self, function):

    self.function = function

    t = c.tokenizer(function)
Severity: Minor
Found in src/host/ida/ui/browser.py - About 35 mins to fix

Function jump_to_ea has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

  def jump_to_ea(self):
    """ generates a list of address where `block` leads to, based on gotos and branches in `block` """
    for stmt in statement_iterator_t(self.function):
      if stmt.container.block != self:
        continue
Severity: Minor
Found in src/decompiler.py - About 35 mins to fix

Function unlink has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

  def unlink(self):
    if isinstance(self, assignable_t):
      assignable_t.unlink(self)
    else:
      for op in self.iteroperands():
Severity: Minor
Found in src/expressions.py - About 35 mins to fix

Function __iter__ has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

  def __iter__(self):
    for expr in expression_iterator_t(self.function):
      for op in expr.iteroperands(self.depth_first, self.ltr):
        if self.filter is None or self.filter(op):
          yield op
Severity: Minor
Found in src/iterators.py - About 35 mins to fix
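
This is a case where the metric's first rule can help: collapsing the two loops and the filter test into a single generator expression is language shorthand, which that rule suggests is not counted as extra nesting. A standalone sketch with plain lists standing in for expression_iterator_t and iteroperands:

  # Standalone sketch: nested loops + condition collapsed into one generator expression.

  def iter_ops_nested(expressions, op_filter=None):
    for expr in expressions:                      # outer loop
      for op in expr:                             # nested loop
        if op_filter is None or op_filter(op):    # nested condition
          yield op

  def iter_ops_flat(expressions, op_filter=None):
    # One comprehension; same behaviour, fewer nested flow breaks to track.
    return (op for expr in expressions for op in expr
            if op_filter is None or op_filter(op))

  exprs = [['eax', 1], ['ebx', 2, 'eax']]
  is_reg = lambda op: isinstance(op, str)
  print(list(iter_ops_nested(exprs, is_reg)) == list(iter_ops_flat(exprs, is_reg)))  # True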
