cea-sec/miasm
miasm/analysis/depgraph.py

Summary

Maintainability: D (about 2 days of estimated remediation)
Test Coverage: not reported

File depgraph.py has 518 lines of code (exceeds 250 allowed). Consider refactoring.

"""Provide dependency graph"""

from functools import total_ordering

from future.utils import viewitems
Severity: Major
Found in miasm/analysis/depgraph.py - About 1 day to fix

    Function get has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.

        def get(self, loc_key, elements, line_nb, heads):
            """Compute the dependencies of @elements at line number @line_nb in
            the block named @loc_key in the current IRCFG, before the execution of
            this line. Dependency check stop if one of @heads is reached
            @loc_key: LocKey instance
    Severity: Minor
    Found in miasm/analysis/depgraph.py - About 1 hr to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

    Further reading
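
    To make these rules concrete, here is a small illustrative sketch (invented for this note, not code from depgraph.py): the first version buries its logic two flow-breaking levels deep inside the loop, while the guard-clause rewrite keeps the same behavior with a flatter, easier-to-read flow:

```python
def classify_nested(values):
    """Nested flow: the inner if/else sits two levels deep in the loop."""
    result = []
    for value in values:
        if value is not None:
            if value > 0:
                result.append("pos")
            else:
                result.append("nonpos")
    return result


def classify_flat(values):
    """Same behavior, but a guard clause keeps the flow linear."""
    result = []
    for value in values:
        if value is None:
            continue  # skip early instead of nesting deeper
        result.append("pos" if value > 0 else "nonpos")
    return result
```

    Both functions return the same labels; the second simply avoids stacking flow-breaking structures inside one another, which is what the metric penalizes.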

    Function irblock_slice has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.

        def irblock_slice(self, irb, max_line=None):
            """Slice of the dependency nodes on the irblock @irb
            @irb: irbloc instance
            """
    Severity: Minor
    Found in miasm/analysis/depgraph.py - About 1 hr to fix

    Function visit_inner has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.

        def visit_inner(self, expr, *args, **kwargs):
            if expr.is_id():
                self.follow.add(expr)
            elif expr.is_int():
                self.nofollow.add(expr)
    Severity: Minor
    Found in miasm/analysis/depgraph.py - About 1 hr to fix

    Function _gen_path_constraints has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.

        def _gen_path_constraints(self, translator, expr, expected):
            """Generate path constraint from @expr. Handle special case with
            generated loc_keys
            """
            out = []
    Severity: Minor
    Found in miasm/analysis/depgraph.py - About 55 mins to fix

    Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.

        def __init__(self, ircfg,
    Severity: Minor
    Found in miasm/analysis/depgraph.py - About 35 mins to fix
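
    A common remedy for a long parameter list is to gather related options into a parameter object. A hedged sketch, since the report truncates the real signature: the `WalkOptions` name, fields, and defaults below are invented for illustration and are not miasm's API:

```python
from dataclasses import dataclass


@dataclass
class WalkOptions:
    """Bundles the boolean knobs a dependency walk might expose."""
    implicit: bool = False
    apply_simp: bool = True
    follow_mem: bool = True
    follow_call: bool = True


class DependencyWalker:
    def __init__(self, ircfg, options=None):
        # Two parameters instead of five; defaults live in one place.
        self.ircfg = ircfg
        self.options = options or WalkOptions()


# Callers spell out only what they override:
walker = DependencyWalker(ircfg=None, options=WalkOptions(follow_mem=False))
```

    This keeps call sites readable and lets new options be added without touching every constructor call.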

      Function emul has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.

          def emul(self, lifter, ctx=None, step=False):
              # Init
              ctx_init = {}
              if ctx is not None:
                  ctx_init.update(ctx)
      Severity: Minor
      Found in miasm/analysis/depgraph.py - About 35 mins to fix

      Function as_graph has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.

          def as_graph(self):
              """Generates a Digraph of dependencies"""
              graph = DiGraph()
              for node_a, node_b in self.links:
                  if not node_b:
      Severity: Minor
      Found in miasm/analysis/depgraph.py - About 35 mins to fix

      Function _track_exprs has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.

          def _track_exprs(self, state, assignblk, line_nb):
              """Track pending expression in an assignblock"""
              future_pending = {}
              node_resolved = set()
              for dst, src in viewitems(assignblk):
      Severity: Minor
      Found in miasm/analysis/depgraph.py - About 25 mins to fix

      Similar blocks of code found in 2 locations. Consider refactoring.

          def __init__(self, follow_mem, follow_call):
              super(FilterExprSources, self).__init__(lambda x:None)
              self.follow_mem = follow_mem
              self.follow_call = follow_call
              self.nofollow = set()
      Severity: Major
      Found in miasm/analysis/depgraph.py and 1 other location - About 2 hrs to fix
      miasm/expression/expression.py on lines 286..291

      Duplicated Code

      Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

      Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

      Tuning

      This issue has a mass of 53.

      We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

      The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

      If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

      See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

      Refactorings

      Further Reading

      Similar blocks of code found in 2 locations. Consider refactoring.

                  if hist_nb == history_size and loc_key == self.initial_state.loc_key:
                      line_nb = self.initial_state.line_nb
                  else:
                      line_nb = None
      Severity: Major
      Found in miasm/analysis/depgraph.py and 1 other location - About 1 hr to fix
      miasm/analysis/depgraph.py on lines 303..306

      This issue has a mass of 40.

      Similar blocks of code found in 2 locations. Consider refactoring.

                  if index == last_index and loc_key == self.initial_state.loc_key:
                      line_nb = self.initial_state.line_nb
                  else:
                      line_nb = None
      Severity: Major
      Found in miasm/analysis/depgraph.py and 1 other location - About 1 hr to fix
      miasm/analysis/depgraph.py on lines 377..380

      This issue has a mass of 40.

      Similar blocks of code found in 2 locations. Consider refactoring.

              elif expr.is_function_call():
                  if self.follow_call:
                      self.follow.add(expr)
                  else:
                      self.nofollow.add(expr)
      Severity: Minor
      Found in miasm/analysis/depgraph.py and 1 other location - About 30 mins to fix
      miasm/analysis/depgraph.py on lines 479..484

      This issue has a mass of 32.

      Similar blocks of code found in 2 locations. Consider refactoring.

              elif expr.is_mem():
                  if self.follow_mem:
                      self.follow.add(expr)
                  else:
                      self.nofollow.add(expr)
      Severity: Minor
      Found in miasm/analysis/depgraph.py and 1 other location - About 30 mins to fix
      miasm/analysis/depgraph.py on lines 485..490

      This issue has a mass of 32.
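
      The two `is_function_call`/`is_mem` branches quoted above differ only in which flag they consult before choosing a set. One way to give that decision a single home is a small helper; the sketch below is illustrative (the `_classify` helper and the stub class are invented for this note, not miasm's actual code):

```python
class FilterExprSources:
    """Minimal stand-in for the class holding the duplicated branches."""

    def __init__(self, follow_mem, follow_call):
        self.follow_mem = follow_mem
        self.follow_call = follow_call
        self.follow = set()
        self.nofollow = set()

    def _classify(self, expr, do_follow):
        # The follow/nofollow decision now lives in exactly one place.
        (self.follow if do_follow else self.nofollow).add(expr)

    def visit_leaf(self, expr):
        if expr.is_function_call():
            self._classify(expr, self.follow_call)
        elif expr.is_mem():
            self._classify(expr, self.follow_mem)
```

      Each `elif` branch shrinks to one line, and any future change to the routing logic happens once instead of twice.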
