tomato42/tlsfuzzer


Showing 272 of 274 total issues

File analysis.py has 2221 lines of code (exceeds 500 allowed). Consider refactoring.
Open

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Author: Jan Koscielniak, (c) 2020
# Author: Hubert Kario, (c) 2020
Severity: Major
Found in tlsfuzzer/analysis.py - About 5 days to fix

    File messages.py has 1730 lines of code (exceeds 500 allowed). Consider refactoring.
    Open

    # Author: Hubert Kario, (c) 2015
    # Released under Gnu GPL v2.0, see LICENSE file for details
    
    """Objects for generating TLS messages to send."""
    
    
    Severity: Major
    Found in tlsfuzzer/messages.py - About 3 days to fix

      File expect.py has 1651 lines of code (exceeds 500 allowed). Consider refactoring.
      Open

      # Author: Hubert Kario, (c) 2015
      # Released under Gnu GPL v2.0, see LICENSE file for details
      
      """Parsing and processing of received TLS messages"""
      from __future__ import print_function
      Severity: Major
      Found in tlsfuzzer/expect.py - About 3 days to fix

        File extract.py has 1363 lines of code (exceeds 500 allowed). Consider refactoring.
        Open

        # Author: Jan Koscielniak, (c) 2020
        # Released under Gnu GPL v2.0, see LICENSE file for details
        
        """Extraction and analysis of timing information from a packet capture."""
        
        
        Severity: Major
        Found in tlsfuzzer/extract.py - About 2 days to fix

          Function analyze_bit_sizes has a Cognitive Complexity of 66 (exceeds 10 allowed). Consider refactoring.
          Open

              def analyze_bit_sizes(self):
                  """
                  Analyses K bit-sizes and creates the plots and the test result files
                  which are placed in an analysis_results directory in the output folder.
          
          
          Severity: Minor
          Found in tlsfuzzer/analysis.py - About 1 day to fix

          Cognitive Complexity

          Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

          A method's cognitive complexity is based on a few simple rules:

          • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
          • Code is considered more complex for each "break in the linear flow of the code"
          • Code is considered more complex when "flow breaking structures are nested"

          Further reading
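To make the nesting rule concrete, here is a small illustrative pair of functions (hypothetical, not from tlsfuzzer). Each "break in the linear flow" adds a point, and each level of nesting adds an extra penalty, so the nested version scores higher even though both do equivalent work:

```python
def find_match_nested(groups, target):
    """Nested flow breaks: each nesting level raises the penalty."""
    result = None
    for group in groups:          # flow break: +1
        for item in group:        # flow break, nested once: +2
            if item == target:    # flow break, nested twice: +3
                result = item
    return result


def find_match_flat(items, target):
    """Same behaviour over a flattened sequence: shallower nesting, lower score."""
    for item in items:            # flow break: +1
        if item == target:        # flow break, nested once: +2
            return item           # early return keeps the rest of the body flat
    return None
```

This is why flattening nested loops, extracting helpers, and returning early are the usual remedies for the high-scoring functions listed in this report.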

          Analysis has 58 functions (exceeds 20 allowed). Consider refactoring.
          Open

          class Analysis(object):
              """Analyse extracted timing information from csv file."""
          
              def __init__(self, output, draw_ecdf_plot=True, draw_scatter_plot=True,
                           draw_conf_interval_plot=True, multithreaded_graph=False,
          Severity: Major
          Found in tlsfuzzer/analysis.py - About 1 day to fix

            Cyclomatic complexity is too high in function main. (49)
            Open

            def main():
                """Process arguments and start extraction."""
                logfile = None
                capture = None
                output = None
            Severity: Minor
            Found in tlsfuzzer/extract.py by radon

            Cyclomatic Complexity

            Cyclomatic Complexity corresponds to the number of decisions a block of code contains plus 1. This number (also called McCabe number) is equal to the number of linearly independent paths through the code. This number can be used as a guide when testing conditional logic in blocks.

            Radon analyzes the AST of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

            Construct          Effect on CC   Reasoning
            if                 +1             An if statement is a single decision.
            elif               +1             The elif statement adds another decision.
            else               +0             The else statement does not cause a new decision; the decision is at the if.
            for                +1             There is a decision at the start of the loop.
            while              +1             There is a decision at the while statement.
            except             +1             Each except branch adds a new conditional path of execution.
            finally            +0             The finally block is unconditionally executed.
            with               +1             The with statement roughly corresponds to a try/except block (see PEP 343 for details).
            assert             +1             The assert statement internally roughly equals a conditional statement.
            Comprehension      +1             A list/set/dict comprehension or generator expression is equivalent to a for loop.
            Boolean Operator   +1             Every boolean operator (and, or) adds a decision point.

            Source: http://radon.readthedocs.org/en/latest/intro.html
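Applying the table above to a small hypothetical function shows how the McCabe number is tallied: base 1, plus one each for the `for`, the `if`, the boolean `and`, and the `elif` (the `else` adds nothing), for a total of 5:

```python
def classify(values, threshold):
    """Cyclomatic complexity per the table: 1 + for + if + and + elif = 5."""
    result = []
    for v in values:                     # for: +1
        if v > threshold and v >= 0:     # if: +1, boolean and: +1
            result.append("high")
        elif v < 0:                      # elif: +1
            result.append("negative")
        else:                            # else: +0
            result.append("low")
    return result
```

A function like `main` in extract.py with a reported complexity of 49 therefore has on the order of 48 such decision points, which is what makes it hard to cover with tests.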

            Function run has a Cognitive Complexity of 57 (exceeds 10 allowed). Consider refactoring.
            Open

                def run(self):
                    """Execute conversation"""
                    node = self.conversation
                    try:
                        while node is not None:
            Severity: Minor
            Found in tlsfuzzer/runner.py - About 1 day to fix


            Function main has a Cognitive Complexity of 55 (exceeds 10 allowed). Consider refactoring.
            Open

            def main():
                """Process arguments and start extraction."""
                logfile = None
                capture = None
                output = None
            Severity: Minor
            Found in tlsfuzzer/extract.py - About 7 hrs to fix


            Function _parse_pcap has a Cognitive Complexity of 51 (exceeds 10 allowed). Consider refactoring.
            Open

                def _parse_pcap(self):
                    """Process capture file."""
                    with open(self.capture, 'rb') as pcap:
                        progress = None
                        try:
            Severity: Minor
            Found in tlsfuzzer/extract.py - About 7 hrs to fix


            Cyclomatic complexity is too high in method _parse_pcap. (42)
            Open

                def _parse_pcap(self):
                    """Process capture file."""
                    with open(self.capture, 'rb') as pcap:
                        progress = None
                        try:
            Severity: Minor
            Found in tlsfuzzer/extract.py by radon


            Consider simplifying this complex logical expression.
            Open

                                if (tcp_pkt.flags & dpkt.tcp.TH_SYN and
                                        tcp_pkt.dport == self.port and
                                        ip_pkt.dst == self.ip_address):
                                    # a SYN packet was found - new connection
                                    # (if a retransmission it won't be counted as at least
            Severity: Critical
            Found in tlsfuzzer/extract.py - About 6 hrs to fix
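A common way to tame an expression like this is to give it a name: move the three conditions into a small predicate so the call site reads as intent. A sketch under assumed names (`TH_SYN` stands in for `dpkt.tcp.TH_SYN`; the function name is hypothetical):

```python
TH_SYN = 0x02  # stand-in for dpkt.tcp.TH_SYN


def is_new_connection(tcp_flags, tcp_dport, ip_dst, port, ip_address):
    """Return True for a SYN packet addressed to the monitored endpoint."""
    return (bool(tcp_flags & TH_SYN)
            and tcp_dport == port
            and ip_dst == ip_address)


# the call site then collapses to something like:
# if is_new_connection(tcp_pkt.flags, tcp_pkt.dport, ip_pkt.dst,
#                      self.port, self.ip_address):
#     ...  # a SYN packet was found - new connection
```

The predicate is also independently testable, which the inline expression buried in `_parse_pcap` is not.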

              Identical blocks of code found in 3 locations. Consider refactoring.
              Open

                      if self.verbose and self._total_measurements:
                          status = [0, self._total_measurements, Event()]
                          kwargs = {}
                          kwargs['unit'] = ' signatures'
                          kwargs['prefix'] = 'decimal'
              Severity: Major
              Found in tlsfuzzer/extract.py and 2 other locations - About 5 hrs to fix
              tlsfuzzer/extract.py on lines 1026..1035
              tlsfuzzer/extract.py on lines 1404..1413

              Duplicated Code

              Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

              Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

              When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

              Tuning

              This issue has a mass of 105.

              We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

              The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

              If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

              See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

              Refactorings

              Further Reading

              Identical blocks of code found in 3 locations. Consider refactoring.
              Open

                          if self.verbose and self._total_measurements:
                              status = [0, self._total_measurements, Event()]
                              kwargs = {}
                              kwargs['unit'] = ' signatures'
                              kwargs['prefix'] = 'decimal'
              Severity: Major
              Found in tlsfuzzer/extract.py and 2 other locations - About 5 hrs to fix
              tlsfuzzer/extract.py on lines 1267..1276
              tlsfuzzer/extract.py on lines 1404..1413

              This issue has a mass of 105.

              Identical blocks of code found in 3 locations. Consider refactoring.
              Open

                      if self.verbose and self._total_measurements:
                          status = [0, self._total_measurements, Event()]
                          kwargs = {}
                          kwargs['unit'] = ' signatures'
                          kwargs['prefix'] = 'decimal'
              Severity: Major
              Found in tlsfuzzer/extract.py and 2 other locations - About 5 hrs to fix
              tlsfuzzer/extract.py on lines 1026..1035
              tlsfuzzer/extract.py on lines 1267..1276

              This issue has a mass of 105.

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                  @staticmethod
                  def _sig_alg_for_ecdsa_key(accept_sig_algs, version, key):
                      """Select an acceptable signature algorithm for a given ecdsa key."""
                      if version < (3, 3):
                          # in TLS 1.1 and earlier, there is no algorithm selection,
              Severity: Major
              Found in tlsfuzzer/messages.py and 1 other location - About 5 hrs to fix
              tlsfuzzer/messages.py on lines 1101..1117

              This issue has a mass of 103.

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                  @staticmethod
                  def _sig_alg_for_dsa_key(accept_sig_algs, version, key):
                      """Select an acceptable signature algorithm for a given DSA key."""
                      if version < (3, 3):
                          # in TLS 1.1 and earlier, there is no algorithm selection,
              Severity: Major
              Found in tlsfuzzer/messages.py and 1 other location - About 5 hrs to fix
              tlsfuzzer/messages.py on lines 1083..1099

              This issue has a mass of 103.

              Cyclomatic complexity is too high in method analyze_bit_sizes. (32)
              Open

                  def analyze_bit_sizes(self):
                      """
                      Analyses K bit-sizes and creates the plots and the test result files
                      which are placed in an analysis_results directory in the output folder.
              
              
              Severity: Minor
              Found in tlsfuzzer/analysis.py by radon


              Function _analyse_weight_pairs has a Cognitive Complexity of 40 (exceeds 10 allowed). Consider refactoring.
              Open

                  def _analyse_weight_pairs(self, pairs):
                      out_dir = self.output
                      output_files = dict()
                      if self.run_sign_test:
                          output_files['sign_test'] = open(
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 5 hrs to fix


              Cyclomatic complexity is too high in method _analyse_weight_pairs. (31)
              Open

                  def _analyse_weight_pairs(self, pairs):
                      out_dir = self.output
                      output_files = dict()
                      if self.run_sign_test:
                          output_files['sign_test'] = open(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by radon

