python-security/pyt

Showing 158 of 158 total issues

File stmt_visitor.py has 932 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import ast
import itertools
import logging
import os.path
from pkgutil import iter_modules
Severity: Major
Found in pyt/cfg/stmt_visitor.py - About 2 days to fix

LabelVisitor has 62 functions (exceeds 20 allowed). Consider refactoring.
Open

class LabelVisitor(ast.NodeVisitor):
    def __init__(self):
        self.result = ''

    def handle_comma_separated(self, comma_separated_list):
Severity: Major
Found in pyt/helper_visitors/label_visitor.py - About 1 day to fix

File expr_visitor.py has 480 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import ast
import logging

from .alias_helper import handle_aliases_in_calls
from ..core.ast_helper import (
Severity: Minor
Found in pyt/cfg/expr_visitor.py - About 7 hrs to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    def slicev(self, node):
        if isinstance(node, ast.Slice):
            if node.lower:
                self.visit(node.lower)
            if node.upper:
Severity: Major
Found in pyt/helper_visitors/label_visitor.py and 1 other location - About 7 hrs to fix
pyt/helper_visitors/vars_visitor.py on lines 131..144

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to replicate further and to diverge, leaving bugs as the two similar implementations drift apart in subtle ways.

Tuning

This issue has a mass of 116.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too easily, try raising the threshold. If you suspect the engine isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
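One way to remove this kind of duplication is to hoist the shared slicev logic into a mixin that both visitor classes inherit. The sketch below is hypothetical, not pyt's actual code: the completion of the truncated method body (the step branch and the non-Slice fallback) is a guess, and NameCollector is purely illustrative.

```python
import ast


class SliceVisitMixin:
    """Shared slice-handling logic; a hypothetical extraction of the
    slicev method duplicated across the two visitor classes."""

    def slicev(self, node):
        # Visit the optional lower/upper/step expressions of a slice.
        if isinstance(node, ast.Slice):
            if node.lower:
                self.visit(node.lower)
            if node.upper:
                self.visit(node.upper)
            if node.step:
                self.visit(node.step)
        else:
            # Non-Slice subscript indices are visited directly.
            self.visit(node)


class NameCollector(SliceVisitMixin, ast.NodeVisitor):
    """Toy visitor demonstrating the mixin."""

    def __init__(self):
        self.names = []

    def visit_Name(self, node):
        self.names.append(node.id)

    def visit_Subscript(self, node):
        self.visit(node.value)
        self.slicev(node.slice)


tree = ast.parse('x[a:b]', mode='eval')
collector = NameCollector()
collector.visit(tree.body)
print(collector.names)  # ['x', 'a', 'b']
```

Both label_visitor and vars_visitor could then inherit the mixin, leaving a single authoritative copy of the slice-walking logic.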

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    def slicev(self, node):
        if isinstance(node, ast.Slice):
            if node.lower:
                self.visit(node.lower)
            if node.upper:
Severity: Major
Found in pyt/helper_visitors/vars_visitor.py and 1 other location - About 7 hrs to fix
pyt/helper_visitors/label_visitor.py on lines 177..190

This issue has a mass of 116.

File vulnerabilities.py has 458 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""Module for finding vulnerabilities based on a definitions file."""

import ast
import json
from collections import defaultdict
Severity: Minor
Found in pyt/vulnerabilities/vulnerabilities.py - About 7 hrs to fix

Function add_module has a Cognitive Complexity of 38 (exceeds 5 allowed). Consider refactoring.
Open

    def add_module(  # noqa: C901
        self,
        module,
        module_or_package_name,
        local_names,
Severity: Minor
Found in pyt/cfg/stmt_visitor.py - About 5 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
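The nesting rule is the one that usually drives scores this high. The toy functions below (not from pyt) illustrate it: both compute the same result, but the nested version reads harder, and the annotations are a rough hand count following the rules above.

```python
def sum_of_primes_nested(nums):
    """Deeply nested flow: each break in linearity costs more as it nests."""
    total = 0
    for n in nums:                    # +1 (flow break)
        if n >= 2:                    # +2 (flow break, nested one level)
            for d in range(2, n):     # +3 (flow break, nested two levels)
                if n % d == 0:        # +4
                    break
            else:
                total += n
    return total


def sum_of_primes_flat(nums):
    """Same result; extraction and shorthand keep the flow linear."""
    def is_prime(n):
        if n < 2:                     # +1 (guard clause, no nesting)
            return False
        # Generator shorthand collapses the trial-division loop.
        return all(n % d for d in range(2, n))

    return sum(n for n in nums if is_prime(n))
```

Extracting helpers and using guard clauses is the standard way to bring a score like 38 back under the threshold without changing behavior.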

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        for module in self.project_modules:
            if name.name == module[0]:
                if os.path.isdir(module[1]):
                    return self.import_package(
                        module,
Severity: Major
Found in pyt/cfg/stmt_visitor.py and 1 other location - About 5 hrs to fix
pyt/cfg/stmt_visitor.py on lines 1030..1043

This issue has a mass of 94.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        for module in self.local_modules:
            if name.name == module[0]:
                if os.path.isdir(module[1]):
                    return self.import_package(
                        module,
Severity: Major
Found in pyt/cfg/stmt_visitor.py and 1 other location - About 5 hrs to fix
pyt/cfg/stmt_visitor.py on lines 1045..1058

This issue has a mass of 94.
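The two flagged loops differ only in the module list they scan (self.local_modules vs. self.project_modules), so one hypothetical fix is a shared helper that both call sites use. The names and signatures below are illustrative, not pyt's actual API:

```python
import os


def find_module(modules, target_name):
    """Return the (name, path) entry matching target_name, or None.

    A hypothetical shared helper replacing the near-identical loops
    over the local and project module lists.
    """
    for module in modules:
        if module[0] == target_name:
            return module
    return None


def import_found_module(modules, target_name, import_package, import_module):
    """Dispatch to package or module import based on the matched path.

    import_package/import_module stand in for the visitor's real methods.
    """
    module = find_module(modules, target_name)
    if module is None:
        return None
    if os.path.isdir(module[1]):
        return import_package(module)
    return import_module(module)
```

With this in place, each call site shrinks to a one-liner and the isdir branching lives in exactly one spot.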

StmtVisitor has 40 functions (exceeds 20 allowed). Consider refactoring.
Open

class StmtVisitor(ast.NodeVisitor):
    def __init__(self, allow_local_directory_imports=True):
        self._allow_local_modules = allow_local_directory_imports
        super().__init__()

Severity: Minor
Found in pyt/cfg/stmt_visitor.py - About 5 hrs to fix

Function save_def_args_in_temp has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
Open

    def save_def_args_in_temp(
        self,
        call_args,
        def_args,
        line_number,
Severity: Minor
Found in pyt/cfg/expr_visitor.py - About 4 hrs to fix


Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __str__(self):
        output = ''
        for x, n in enumerate(self.nodes):
            output = ''.join((output, 'Node: ' + str(x) + ' ' + str(n), '\n\n'))
        return output
Severity: Major
Found in pyt/cfg/make_cfg.py and 1 other location - About 3 hrs to fix
pyt/cfg/make_cfg.py on lines 15..19

This issue has a mass of 70.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __repr__(self):
        output = ''
        for x, n in enumerate(self.nodes):
            output = ''.join((output, 'Node: ' + str(x) + ' ' + repr(n), '\n\n'))
        return output
Severity: Major
Found in pyt/cfg/make_cfg.py and 1 other location - About 3 hrs to fix
pyt/cfg/make_cfg.py on lines 21..25

This issue has a mass of 70.
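These two methods differ only in the per-node formatter (str vs. repr), so the duplication disappears if that choice becomes a parameter. A minimal sketch, with a toy CFG class standing in for the real one in make_cfg.py:

```python
class CFG:
    """Toy stand-in for the class in make_cfg.py that holds a node list."""

    def __init__(self, nodes):
        self.nodes = nodes

    def _format_nodes(self, fmt):
        # Shared body; fmt is str for __str__ and repr for __repr__.
        return ''.join(
            'Node: {} {}\n\n'.format(x, fmt(n))
            for x, n in enumerate(self.nodes)
        )

    def __str__(self):
        return self._format_nodes(str)

    def __repr__(self):
        return self._format_nodes(repr)
```

Passing the builtin str/repr functions directly keeps both dunder methods to a single line each while preserving the original output format.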

Function add_blackbox_or_builtin_call has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

    def add_blackbox_or_builtin_call(self, node, blackbox):  # noqa: C901
        """Processes a blackbox or builtin function when it is called.
        Nothing gets assigned to ret_func_foo in the builtin/blackbox case.

        Increments self.function_call_index each time it is called, we can refer to it as N in the comments.
Severity: Minor
Found in pyt/cfg/stmt_visitor.py - About 3 hrs to fix


Function discover_files has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
Open

def discover_files(targets, excluded_files, recursive=False):
    included_files = list()
    excluded_list = excluded_files.split(",")
    for target in targets:
        if os.path.isdir(target):
Severity: Minor
Found in pyt/__main__.py - About 3 hrs to fix

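One hypothetical way to flatten nested file-discovery logic like this is to pull the inclusion test into a predicate and let os.walk handle the recursion. This sketch assumes discovery means collecting .py files while skipping excluded basenames; pyt's real rules may differ:

```python
import os


def discover_files(targets, excluded_files, recursive=False):
    """Collect .py files from targets, skipping excluded basenames."""
    excluded = set(excluded_files.split(','))

    def included(path):
        return path.endswith('.py') and os.path.basename(path) not in excluded

    result = []
    for target in targets:
        # Guard clause: file targets are handled up front, keeping the
        # directory logic un-nested.
        if not os.path.isdir(target):
            if included(target):
                result.append(target)
            continue
        if recursive:
            for root, _, files in os.walk(target):
                result.extend(
                    os.path.join(root, f) for f in files if included(f)
                )
        else:
            result.extend(
                os.path.join(target, f)
                for f in os.listdir(target)
                if included(f)
            )
    return result
```

Most of the cognitive cost in the original comes from branching inside branching; the predicate plus guard clause removes two nesting levels without changing what gets collected.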

Function visit_Import has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
Open

    def visit_Import(self, node):
        for name in node.names:
            for module in self.local_modules:
                if name.name == module[0]:
                    if os.path.isdir(module[1]):
Severity: Minor
Found in pyt/cfg/stmt_visitor.py - About 3 hrs to fix


Cyclomatic complexity is too high in function main. (15)
Open

def main(command_line_args=sys.argv[1:]):  # noqa: C901
    args = parse_args(command_line_args)

    logging_level = (
        logging.ERROR if not args.verbose else
Severity: Minor
Found in pyt/__main__.py by radon

Cyclomatic Complexity

Cyclomatic Complexity corresponds to the number of decisions a block of code contains, plus 1. This number (also called the McCabe number) equals the number of linearly independent paths through the code, and can be used as a guide when testing conditional logic in blocks.

Radon analyzes the AST of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

Construct         Effect on CC  Reasoning
if                +1            An if statement is a single decision.
elif              +1            The elif statement adds another decision.
else              +0            The else statement does not cause a new decision. The decision is at the if.
for               +1            There is a decision at the start of the loop.
while             +1            There is a decision at the while statement.
except            +1            Each except branch adds a new conditional path of execution.
finally           +0            The finally block is unconditionally executed.
with              +1            The with statement roughly corresponds to a try/except block (see PEP 343 for details).
assert            +1            The assert statement internally roughly equals a conditional statement.
Comprehension     +1            A list/set/dict comprehension or generator expression is equivalent to a for loop.
Boolean Operator  +1            Every boolean operator (and, or) adds a decision point.

Source: http://radon.readthedocs.org/en/latest/intro.html

Function get_vulnerability has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
Open

def get_vulnerability(
    source,
    sink,
    triggers,
    lattice,
Severity: Minor
Found in pyt/vulnerabilities/vulnerabilities.py - About 3 hrs to fix


Function stmt_star_handler has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
Open

    def stmt_star_handler(
        self,
        stmts,
        prev_node_to_avoid=None
    ):
Severity: Minor
Found in pyt/cfg/stmt_visitor.py - About 3 hrs to fix


Similar blocks of code found in 2 locations. Consider refactoring.
Open

def handle_aliases_in_init_files(name, import_alias_mapping):
    """Returns either None or the handled alias.
    Used in add_module.
    """
    for key, val in import_alias_mapping.items():
Severity: Major
Found in pyt/cfg/alias_helper.py and 1 other location - About 3 hrs to fix
pyt/cfg/alias_helper.py on lines 15..29

This issue has a mass of 63.
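Since the preview shows only the function head, the sketch below is generic: it shows how two alias-lookup functions that share the same matching loop might delegate to one core helper. The matching rule and the handle_aliases_in_calls body are assumptions, not pyt's actual logic:

```python
def resolve_alias(name, import_alias_mapping, match):
    """Return the aliased name if any mapping entry matches, else None.

    A hypothetical shared core for the two near-identical alias handlers;
    the match callback captures whatever differs between the call sites.
    """
    for key, val in import_alias_mapping.items():
        if match(name, key):
            # Rewrite the matched prefix with its alias target.
            return name.replace(key, val, 1)
    return None


def handle_aliases_in_calls(name, import_alias_mapping):
    # Illustrative rule: 'os.path.join' matches the alias key 'os.path'.
    return resolve_alias(
        name,
        import_alias_mapping,
        lambda n, k: n == k or n.startswith(k + '.'),
    )
```

The init-file variant would pass its own match lambda, so the dictionary walk and replacement live in exactly one place.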
