cea-sec/miasm


Showing 3,020 of 3,020 total issues

Function simp_flags has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

def simp_flags(_, expr):
    args = expr.args

    if expr.is_op("FLAG_EQ"):
        return ExprCond(args[0], ExprInt(0, 1), ExprInt(1, 1))
Severity: Minor
Found in miasm/expression/simplifications_explicit.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading

Function simp_cmp_bijective_op has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

def simp_cmp_bijective_op(expr_simp, expr):
    """
    A + B == A => A == 0

    X + A == X + B => A == B
Severity: Minor
Found in miasm/expression/simplifications_common.py - About 4 hrs to fix

Function order_ssa_var_dom has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

    def order_ssa_var_dom(self):
        """Compute dominance order of each ssa variable"""
        ircfg = self.ssa.graph

        # compute dominator tree
Severity: Minor
Found in miasm/analysis/outofssa.py - About 4 hrs to fix

Function arm_guess_jump_table has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

def arm_guess_jump_table(dis_engine, cur_block, offsets_to_dis):
    arch = dis_engine.arch
    loc_db = dis_engine.loc_db
    lifter_model_call = get_lifter_model_call(arch, dis_engine.attrib)

Severity: Minor
Found in miasm/analysis/disasm_cb.py - About 4 hrs to fix

Function _parse_body has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

    def _parse_body(self, body, argument_names):
        """Recursive function transforming a @body to a block expression
        Return:
         - AST to append to body (real python statements)
         - a list of blocks, ie list of affblock, ie list of ExprAssign (AST)"""
Severity: Minor
Found in miasm/core/sembuilder.py - About 4 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def compare_expr_list(l1_e, l2_e):
    # Sort by list elements in incremental order, then by list size
    for i in range(min(len(l1_e), len(l2_e))):
        ret = compare_exprs(l1_e[i], l2_e[i])
        if ret:
Severity: Major
Found in miasm/expression/expression.py and 1 other location - About 4 hrs to fix
miasm/expression/expression.py on lines 1526..1532

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
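
In the pair flagged here, for instance, compare_expr_list and its twin compare_expr_list_compose (the second location) appear to implement the same element-wise comparison. A sketch of one way to share the logic; the length tie-break is an assumption, since the excerpts are truncated:

def _compare_expr_seq(l1_e, l2_e):
    # Compare element by element; the first difference decides.
    for e1, e2 in zip(l1_e, l2_e):
        ret = compare_exprs(e1, e2)
        if ret:
            return ret
    # Assumed tie-break on length, mirroring the "then by list size" comment.
    return (len(l1_e) > len(l2_e)) - (len(l1_e) < len(l2_e))

Both public functions could then delegate to this helper, keeping a single authoritative implementation.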

Tuning

This issue has a mass of 75.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def compare_expr_list_compose(l1_e, l2_e):
    # Sort by list elements in incremental order, then by list size
    for i in range(min(len(l1_e), len(l2_e))):
        ret = compare_exprs(l1_e[i], l2_e[i])
        if ret:
Severity: Major
Found in miasm/expression/expression.py and 1 other location - About 4 hrs to fix
miasm/expression/expression.py on lines 1535..1541

This issue has a mass of 75.

DebugCmd has 32 functions (exceeds 20 allowed). Consider refactoring.
Open

class DebugCmd(cmd.Cmd, object):

    "CommandLineInterpreter for Debugguer instance"

    color_g = '\033[92m'
Severity: Minor
Found in miasm/analysis/debugging.py - About 4 hrs to fix
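
One common way to trim a cmd.Cmd subclass that has accumulated too many methods is to split related do_* commands into mixins and compose them. A minimal, hypothetical sketch (names invented for illustration, not miasm's actual layout):

import cmd

class BreakpointCommands(object):
    def do_break(self, arg):
        """Set a breakpoint (illustrative only)."""
        print("breakpoint at %s" % arg)

class MemoryCommands(object):
    def do_dump(self, arg):
        """Dump memory (illustrative only)."""
        print("dump %s" % arg)

class DebugShell(BreakpointCommands, MemoryCommands, cmd.Cmd):
    prompt = "(dbg) "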

Function simp_cond has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

def simp_cond(_, expr):
    """
    Common simplifications on ExprCond.
    Eval exprcond src1/src2 with satifiable/unsatisfiable condition propagation
    """
Severity: Minor
Found in miasm/expression/simplifications_common.py - About 3 hrs to fix

Function may_interfer has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

    def may_interfer(self, dsts, src):
        """
        Return True if @src may interfere with expressions in @dsts
        @dsts: Set of Expressions
        @src: expression to test
Severity: Minor
Found in miasm/analysis/data_flow.py - About 3 hrs to fix

Function vm_load_elf has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

def vm_load_elf(vm, fdata, name="", base_addr=0, loc_db=None, apply_reloc=False,
                **kargs):
    """
    Very dirty elf loader
    TODO XXX: implement real loader
Severity: Minor
Found in miasm/jitter/loader/elf.py - About 3 hrs to fix

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_armtl(Arch):
    _ARCH_ = "armtl"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 417..428
miasm/analysis/sandbox.py on lines 431..442
miasm/analysis/sandbox.py on lines 459..470
miasm/analysis/sandbox.py on lines 473..484
miasm/analysis/sandbox.py on lines 487..498
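
Judging from the excerpts, these sandbox Arch_* subclasses differ only in the _ARCH_ string and share the same stack constants. A hypothetical, table-driven way to collapse the six near-copies (a sketch, not miasm's actual code; Arch is the base class shown in the excerpt):

def make_arch_class(base, arch_name, stack_size=0x100000, stack_base=0x100000):
    # Build an Arch subclass whose only specifics are the name and stack layout.
    return type(
        "Arch_%s" % arch_name,
        (base,),
        {"_ARCH_": arch_name, "STACK_SIZE": stack_size, "STACK_BASE": stack_base},
    )

# Usage sketch, with the class names taken from this report:
# Arch_arml     = make_arch_class(Arch, "arml")
# Arch_armb     = make_arch_class(Arch, "armb")
# Arch_armtl    = make_arch_class(Arch, "armtl")
# Arch_aarch64l = make_arch_class(Arch, "aarch64l")
# Arch_aarch64b = make_arch_class(Arch, "aarch64b")
# Arch_mips32b  = make_arch_class(Arch, "mips32b")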

This issue has a mass of 73.

Similar blocks of code found in 4 locations. Consider refactoring.
Open

    @classmethod
    def endian_offset(cls, attrib, offset):
        if attrib == "l":
            return (offset & ~3) + 3 - offset % 4
        elif attrib == "b":
Severity: Major
Found in miasm/arch/arm/arch.py and 3 other locations - About 3 hrs to fix
miasm/arch/aarch64/arch.py on lines 544..551
miasm/arch/arm/arch.py on lines 817..824
miasm/arch/mips32/arch.py on lines 235..242
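
The four flagged endian_offset implementations appear to differ only in word size (4 bytes here and in the aarch64 copy, 2 bytes in the half-word variant later in this report). A sketch of a shared helper; the "b" branch is cut off in the excerpts, so its body below is an assumption:

def endian_offset(attrib, offset, word_size):
    # Generic form of the per-architecture classmethods: mirror the byte
    # offset within each word when decoding little endian.
    mask = word_size - 1
    if attrib == "l":
        return (offset & ~mask) + mask - offset % word_size
    elif attrib == "b":
        # Assumption: the truncated big-endian branch returns the offset unchanged.
        return offset
    raise ValueError("unknown endianness: %r" % attrib)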

This issue has a mass of 73.

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_mips32b(Arch):
    _ARCH_ = "mips32b"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 417..428
miasm/analysis/sandbox.py on lines 431..442
miasm/analysis/sandbox.py on lines 445..456
miasm/analysis/sandbox.py on lines 473..484
miasm/analysis/sandbox.py on lines 487..498

This issue has a mass of 73.

Similar blocks of code found in 4 locations. Consider refactoring.
Open

    @classmethod
    def endian_offset(cls, attrib, offset):
        if attrib == "l":
            return (offset & ~3) + 3 - offset % 4
        elif attrib == "b":
Severity: Major
Found in miasm/arch/aarch64/arch.py and 3 other locations - About 3 hrs to fix
miasm/arch/arm/arch.py on lines 716..723
miasm/arch/arm/arch.py on lines 817..824
miasm/arch/mips32/arch.py on lines 235..242

This issue has a mass of 73.

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_aarch64l(Arch):
    _ARCH_ = "aarch64l"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 417..428
miasm/analysis/sandbox.py on lines 431..442
miasm/analysis/sandbox.py on lines 445..456
miasm/analysis/sandbox.py on lines 459..470
miasm/analysis/sandbox.py on lines 487..498

This issue has a mass of 73.

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_arml(Arch):
    _ARCH_ = "arml"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 431..442
miasm/analysis/sandbox.py on lines 445..456
miasm/analysis/sandbox.py on lines 459..470
miasm/analysis/sandbox.py on lines 473..484
miasm/analysis/sandbox.py on lines 487..498

This issue has a mass of 73.

Similar blocks of code found in 4 locations. Consider refactoring.
Open

    @classmethod
    def endian_offset(cls, attrib, offset):
        if attrib == "l":
            return (offset & ~1) + 1 - offset % 2
        elif attrib == "b":
Severity: Major
Found in miasm/arch/arm/arch.py and 3 other locations - About 3 hrs to fix
miasm/arch/aarch64/arch.py on lines 544..551
miasm/arch/arm/arch.py on lines 716..723
miasm/arch/mips32/arch.py on lines 235..242

This issue has a mass of 73.

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_armb(Arch):
    _ARCH_ = "armb"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 417..428
miasm/analysis/sandbox.py on lines 445..456
miasm/analysis/sandbox.py on lines 459..470
miasm/analysis/sandbox.py on lines 473..484
miasm/analysis/sandbox.py on lines 487..498

This issue has a mass of 73.

Similar blocks of code found in 6 locations. Consider refactoring.
Open

class Arch_aarch64b(Arch):
    _ARCH_ = "aarch64b"
    STACK_SIZE = 0x100000
    STACK_BASE = 0x100000


Severity: Major
Found in miasm/analysis/sandbox.py and 5 other locations - About 3 hrs to fix
miasm/analysis/sandbox.py on lines 417..428
miasm/analysis/sandbox.py on lines 431..442
miasm/analysis/sandbox.py on lines 445..456
miasm/analysis/sandbox.py on lines 459..470
miasm/analysis/sandbox.py on lines 473..484

This issue has a mass of 73.
