tomato42/tlsfuzzer

tlsfuzzer/analysis.py

Summary

Maintainability: F (estimated time to fix: 2 wks)
Test Coverage: A (98%)

File analysis.py has 2377 lines of code (exceeds 500 allowed). Consider refactoring.
Open

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Author: Jan Koscielniak, (c) 2020
# Author: Hubert Kario, (c) 2020
Severity: Major
Found in tlsfuzzer/analysis.py - About 5 days to fix

    Function analyze_bit_sizes has a Cognitive Complexity of 70 (exceeds 10 allowed). Consider refactoring.
    Open

        def analyze_bit_sizes(self):
            """
            Analyses K bit-sizes and creates the plots and the test result files
            which are placed in an analysis_results directory in the output folder.
    
    
    Severity: Minor
    Found in tlsfuzzer/analysis.py - About 1 day to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"
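    A minimal sketch (not code from analysis.py) of how nesting drives the score up under these rules; the per-line increments are approximate, since the exact bookkeeping depends on the implementation:

        def nested_sum(rows):
            total = 0
            for row in rows:                   # +1 (flow break)            -> 1
                if row:                        # +1, +1 for nesting         -> 3
                    for value in row:          # +1, +2 for nesting         -> 6
                        if value is not None:  # +1, +3 for nesting         -> 10
                            total += value
            return total                       # cognitive complexity of roughly 10

    Hoisting the inner loop into a helper keeps the structural increments but drops the nesting penalties, which is usually the quickest way to bring such a score down.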

    Further reading

    Analysis has 61 functions (exceeds 20 allowed). Consider refactoring.
    Open

    class Analysis(object):
        """Analyse extracted timing information from csv file."""
    
        def __init__(self, output, draw_ecdf_plot=True, draw_scatter_plot=True,
                     draw_conf_interval_plot=True, multithreaded_graph=False,
    Severity: Major
    Found in tlsfuzzer/analysis.py - About 1 day to fix

      Function _analyse_weight_pairs has a Cognitive Complexity of 44 (exceeds 10 allowed). Consider refactoring.
      Open

          def _analyse_weight_pairs(self, pairs):
              out_dir = self.output
              output_files = dict()
              if self.run_sign_test:
                  output_files['sign_test'] = open(
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 5 hrs to fix

      Cyclomatic complexity is too high in method analyze_bit_sizes. (35)
      Open

          def analyze_bit_sizes(self):
              """
              Analyses K bit-sizes and creates the plots and the test result files
              which are placed in an analysis_results directory in the output folder.
      
      
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic Complexity

      Cyclomatic Complexity corresponds to the number of decisions a block of code contains plus 1. This number (also called McCabe number) is equal to the number of linearly independent paths through the code. This number can be used as a guide when testing conditional logic in blocks.

      Radon analyzes the AST tree of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

      Construct         Effect on CC   Reasoning
      if                +1             An if statement is a single decision.
      elif              +1             The elif statement adds another decision.
      else              +0             The else statement does not cause a new decision. The decision is at the if.
      for               +1             There is a decision at the start of the loop.
      while             +1             There is a decision at the while statement.
      except            +1             Each except branch adds a new conditional path of execution.
      finally           +0             The finally block is unconditionally executed.
      with              +1             The with statement roughly corresponds to a try/except block (see PEP 343 for details).
      assert            +1             The assert statement internally roughly equals a conditional statement.
      Comprehension     +1             A list/set/dict comprehension or generator expression is equivalent to a for loop.
      Boolean Operator  +1             Every boolean operator (and, or) adds a decision point.

      Source: http://radon.readthedocs.org/en/latest/intro.html
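      As a rough illustration (not taken from analysis.py), the tally for a small, made-up function follows the table above:

          def classify(values, threshold=0.05):        # radon starts each function at 1
              results = []
              for value in values:                      # for: +1                       -> 2
                  if value is None or value < 0:        # if: +1, boolean "or": +1      -> 4
                      continue
                  elif value < threshold:               # elif: +1                      -> 5
                      results.append("significant")
                  else:                                 # else: +0
                      results.append("not significant")
              return results                            # total cyclomatic complexity: 5

      With radon installed, the per-function scores for the real file can be listed with: radon cc tlsfuzzer/analysis.py -s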

      Cyclomatic complexity is too high in method _analyse_weight_pairs. (34)
      Open

          def _analyse_weight_pairs(self, pairs):
              out_dir = self.output
              output_files = dict()
              if self.run_sign_test:
                  output_files['sign_test'] = open(
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in function main. (26)
      Open

      def main():
          """Process arguments and start analysis."""
          output = None
          ecdf_plot = True
          scatter_plot = True
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method _figure_out_analysis_data_size. (22)
      Open

          def _figure_out_analysis_data_size(self):
              pair = TestPair(0, 1)
              old_vebose = self.verbose
              self.verbose = False
              max_limit = 0
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Function main has a Cognitive Complexity of 27 (exceeds 10 allowed). Consider refactoring.
      Open

      def main():
          """Process arguments and start analysis."""
          output = None
          ecdf_plot = True
          scatter_plot = True
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 3 hrs to fix

      Cyclomatic complexity is too high in method _write_individual_results. (15)
      Open

          def _write_individual_results(self):
              """Write results to report.csv"""
              if self.verbose:
                  start_time = time.time()
                  print("[i] Starting calculation of individual results")
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method _write_summary. (15)
      Open

          def _write_summary(self, difference, p_vals, sign_p_vals, worst_pair,
                             friedman_p, worst_pair_conf_int):
              """Write the report.txt file and print summary."""
              report_filename = join(self.output, "report.csv")
              text_report_filename = join(self.output, "report.txt")
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Function _split_data_to_pairwise has a Cognitive Complexity of 22 (exceeds 10 allowed). Consider refactoring.
      Open

          def _split_data_to_pairwise(self, name):
              data = self._read_hamming_weight_data(name)
              try:
                  pair_writers = dict()
      
      
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 2 hrs to fix

      Cyclomatic complexity is too high in method create_k_specific_dirs. (13)
      Open

          def create_k_specific_dirs(self):
              """
              Creates a folder with timing.csv for each K bit-size so it can be
              analyzed one at a time.
              """
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method _convert_to_binary. (12)
      Open

          def _convert_to_binary(self):
              timing_bin_path = join(self.output, "timing.bin")
              timing_csv_path = join(self.output, "timing.csv")
              legend_csv_path = join(self.output, "legend.csv")
              timing_bin_shape_path = join(self.output, "timing.bin.shape")
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method conf_interval_plot. (12)
      Open

          def conf_interval_plot(self):
              """Generate the confidence inteval for differences between samples."""
              if not self.draw_conf_interval_plot:
                  return
              if self.verbose:
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method _split_data_to_pairwise. (12)
      Open

          def _split_data_to_pairwise(self, name):
              data = self._read_hamming_weight_data(name)
              try:
                  pair_writers = dict()
      
      
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Cyclomatic complexity is too high in method diff_ecdf_plot. (11)
      Open

          def diff_ecdf_plot(self):
              """Generate ECDF plot of differences between test classes."""
              if not self.draw_ecdf_plot:
                  return
              if self.verbose:
      Severity: Minor
      Found in tlsfuzzer/analysis.py by radon

      Function _write_summary has a Cognitive Complexity of 16 (exceeds 10 allowed). Consider refactoring.
      Open

          def _write_summary(self, difference, p_vals, sign_p_vals, worst_pair,
                             friedman_p, worst_pair_conf_int):
              """Write the report.txt file and print summary."""
              report_filename = join(self.output, "report.csv")
              text_report_filename = join(self.output, "report.txt")
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 1 hr to fix

      Function _figure_out_analysis_data_size has a Cognitive Complexity of 16 (exceeds 10 allowed). Consider refactoring.
      Open

          def _figure_out_analysis_data_size(self):
              pair = TestPair(0, 1)
              old_vebose = self.verbose
              self.verbose = False
              max_limit = 0
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 1 hr to fix

      Function _write_individual_results has a Cognitive Complexity of 15 (exceeds 10 allowed). Consider refactoring.
      Open

          def _write_individual_results(self):
              """Write results to report.csv"""
              if self.verbose:
                  start_time = time.time()
                  print("[i] Starting calculation of individual results")
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 1 hr to fix

      Function _figure_out_analysis_data_size has 27 lines of code (exceeds 25 allowed). Consider refactoring.
      Open

          def _figure_out_analysis_data_size(self):
              pair = TestPair(0, 1)
              old_vebose = self.verbose
              self.verbose = False
              max_limit = 0
      Severity: Minor
      Found in tlsfuzzer/analysis.py - About 1 hr to fix

        Function main has 26 lines of code (exceeds 25 allowed). Consider refactoring.
        Open

        def main():
            """Process arguments and start analysis."""
            output = None
            ecdf_plot = True
            scatter_plot = True
        Severity: Minor
        Found in tlsfuzzer/analysis.py - About 1 hr to fix

          Function diff_scatter_plot has 26 lines of code (exceeds 25 allowed). Consider refactoring.
          Open

              def diff_scatter_plot(self):
                  """Generate scatter plot showing differences between samples."""
                  if not self.draw_scatter_plot:
                      return
                  if self.verbose:
          Severity: Minor
          Found in tlsfuzzer/analysis.py - About 1 hr to fix

            Function create_k_specific_dirs has a Cognitive Complexity of 13 (exceeds 10 allowed). Consider refactoring.
            Open

                def create_k_specific_dirs(self):
                    """
                    Creates a folder with timing.csv for each K bit-size so it can be
                    analyzed one at a time.
                    """
            Severity: Minor
            Found in tlsfuzzer/analysis.py - About 45 mins to fix

            Function analyse_hamming_weights has a Cognitive Complexity of 13 (exceeds 10 allowed). Consider refactoring.
            Open

                def analyse_hamming_weights(self):
                    name = join(self.output, self.measurements_filename)
            
                    self._hamming_weight_report += "tlsfuzzer analyse.py version {0} "\
                        .format(VERSION)
            Severity: Minor
            Found in tlsfuzzer/analysis.py - About 45 mins to fix

            Avoid deeply nested control flow statements.
            Open

                                    if float(row[1]) > float(row[0]):
                                        passed += 1
                                    total += 1
            Severity: Major
            Found in tlsfuzzer/analysis.py - About 45 mins to fix
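            One way to flatten this (a sketch only, with illustrative names; the surrounding loop structure is not shown in the excerpt) is to move the innermost counting loop into a small helper so the caller stays at a single indentation level:

                def _count_passed(rows):
                    """Count rows whose second column is larger than the first."""
                    passed = 0
                    total = 0
                    for row in rows:
                        if float(row[1]) > float(row[0]):
                            passed += 1
                        total += 1
                    return passed, total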

              Function diff_ecdf_plot has a Cognitive Complexity of 12 (exceeds 10 allowed). Consider refactoring.
              Open

                  def diff_ecdf_plot(self):
                      """Generate ECDF plot of differences between test classes."""
                      if not self.draw_ecdf_plot:
                          return
                      if self.verbose:
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 35 mins to fix

              Function _bit_size_smart_analysis_worker has a Cognitive Complexity of 12 (exceeds 10 allowed). Consider refactoring.
              Open

                  def _bit_size_smart_analysis_worker(self, args):
                      name_bin, bounds = args
                      start, end = bounds
                      max_k_size_value = -1
                      prev_tupple_id = -1
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 35 mins to fix

              Function conf_interval_plot has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.
              Open

                  def conf_interval_plot(self):
                      """Generate the confidence inteval for differences between samples."""
                      if not self.draw_conf_interval_plot:
                          return
                      if self.verbose:
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 25 mins to fix

              Function _read_bit_size_measurement_file has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.
              Open

                  def _read_bit_size_measurement_file(self, status=None):
                      """Returns an iterator with the data from the measurements file."""
                      with open(join(self.output, self.measurements_filename), 'r') as in_fp:
                          if status:
                              in_fp.seek(0, 2)
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 25 mins to fix

              Function _bit_size_come_to_verdict has a Cognitive Complexity of 11 (exceeds 10 allowed). Consider refactoring.
              Open

                  def _bit_size_come_to_verdict(self, analysis_ret_val,
                                                skillings_mack_pvalue):
                      """Comes to a verdict if implementation is vulnerable"""
                      explanation = None
                      difference = 1
              Severity: Minor
              Found in tlsfuzzer/analysis.py - About 25 mins to fix

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                              if bootstraping_of_size["trim_mean_45"][0] < 0:
                                  trim_mean_45 = "{0:.3e} (±{1:.2e}s)".format(
                                      bootstraping_of_size["trim_mean_45"][0],
                                      bootstraping_of_size["trim_mean_45"][1]
                                  )
              Severity: Major
              Found in tlsfuzzer/analysis.py and 1 other location - About 4 hrs to fix
              tlsfuzzer/analysis.py on lines 1753..1761

              Duplicated Code

              Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

              Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

              When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

              Tuning

              This issue has a mass of 95.

              We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

              The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

              If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

              See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

              Refactorings

              Further Reading

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                              if bootstraping_of_size["trim_mean_05"][0] < 0:
                                  trim_mean_05 = "{0:.3e} (±{1:.2e}s)".format(
                                      bootstraping_of_size["trim_mean_05"][0],
                                      bootstraping_of_size["trim_mean_05"][1]
                                  )
              Severity: Major
              Found in tlsfuzzer/analysis.py and 1 other location - About 4 hrs to fix
              tlsfuzzer/analysis.py on lines 1764..1772

              This issue has a mass of 95.
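              Both excerpts format a (value, error) pair from the bootstrap results and appear to differ only in the dictionary key, so the shared part could live in one helper. This is a sketch with a made-up name; it assumes the non-negative branch (not shown in the excerpts) only picks a different format string:

                  def _format_bootstrap_estimate(bootstraping_of_size, key):
                      value, error = bootstraping_of_size[key]
                      # negative branch as shown in the report; the non-negative case
                      # would presumably swap in a different format string here
                      return "{0:.3e} (±{1:.2e}s)".format(value, error)

                  # call sites then collapse to:
                  # trim_mean_05 = _format_bootstrap_estimate(bootstraping_of_size, "trim_mean_05")
                  # trim_mean_45 = _format_bootstrap_estimate(bootstraping_of_size, "trim_mean_45")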

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                  def rel_t_test(self):
                      """Cross-test all classes using the t-test for dependent, paired
                      samples."""
                      if self.verbose:
                          start_time = time.time()
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 3 hrs to fix
              tlsfuzzer/analysis.py on lines 439..448
              tlsfuzzer/analysis.py on lines 1069..1081

              This issue has a mass of 76.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                  def wilcoxon_test(self):
                      """Cross-test all classes with the Wilcoxon signed-rank test"""
                      if self.verbose:
                          start_time = time.time()
                          print("[i] Starting Wilcoxon signed-rank test")
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 3 hrs to fix
              tlsfuzzer/analysis.py on lines 455..465
              tlsfuzzer/analysis.py on lines 1069..1081

              This issue has a mass of 76.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                  def desc_stats(self):
                      """Calculate the descriptive statistics for sample differences."""
                      if self.verbose:
                          start_time = time.time()
                          print("[i] Calculating descriptive statistics of sample "
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 3 hrs to fix
              tlsfuzzer/analysis.py on lines 439..448
              tlsfuzzer/analysis.py on lines 455..465

              This issue has a mass of 76.
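              The three methods flagged here (rel_t_test, wilcoxon_test, desc_stats) open with the same verbose timing preamble, so the duplication could be pushed into one wrapper. The decorator below is only a sketch, not part of analysis.py; the finish message is a guess based on the preamble shown in the excerpts:

                  import time
                  from functools import wraps

                  def _timed_step(description):
                      """Print start/finish messages around a method when self.verbose is set."""
                      def decorator(method):
                          @wraps(method)
                          def wrapper(self, *args, **kwargs):
                              start_time = time.time()
                              if self.verbose:
                                  print("[i] Starting {0}".format(description))
                              result = method(self, *args, **kwargs)
                              if self.verbose:
                                  print("[i] {0} done in {1:.2f}s".format(
                                      description, time.time() - start_time))
                              return result
                          return wrapper
                      return decorator

                  # usage sketch:
                  # @_timed_step("Wilcoxon signed-rank test")
                  # def wilcoxon_test(self): ...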

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                      if os.path.isfile(timing_bin_path) and \
                              os.path.isfile(legend_csv_path) and \
                              os.path.isfile(timing_bin_shape_path) and \
                              os.path.getmtime(timing_csv_path) < \
                              os.path.getmtime(timing_bin_path):
              Severity: Major
              Found in tlsfuzzer/analysis.py and 1 other location - About 1 hr to fix
              tlsfuzzer/analysis.py on lines 1519..1523

              This issue has a mass of 58.

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                      if os.path.isfile(measurements_bin_path) and \
                              os.path.isfile(measurements_bin_shape_path) and \
                              os.path.isfile(measurements_csv_path) and \
                              os.path.getmtime(measurements_csv_path) < \
                              os.path.getmtime(measurements_bin_path):  # pragma: no cover
              Severity: Major
              Found in tlsfuzzer/analysis.py and 1 other location - About 1 hr to fix
              tlsfuzzer/analysis.py on lines 289..293

              This issue has a mass of 58.
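              Both staleness checks ask the same question, namely whether every derived binary file exists and is newer than the source csv, so they could share one helper; a sketch with hypothetical names:

                  import os

                  def _cache_is_fresh(source_csv, derived_paths):
                      """True when all derived files exist and are newer than the source csv."""
                      if not all(os.path.isfile(path) for path in derived_paths):
                          return False
                      source_mtime = os.path.getmtime(source_csv)
                      return all(source_mtime < os.path.getmtime(path)
                                 for path in derived_paths)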

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                      data = np.memmap(name_bin,
                                       dtype=[('tuple_num', np.dtype('i8')),
                                              ('k_size', np.dtype('i2')),
                                              ('value', np.dtype('f8'))],
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 1 hr to fix
              tlsfuzzer/analysis.py on lines 1602..1605
              tlsfuzzer/analysis.py on lines 2180..2183

              This issue has a mass of 49.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                      data = np.memmap(measurements_bin_path,
                                       dtype=[('block', np.dtype('i8')),
                                              ('group', np.dtype('i2')),
                                              ('value', np.dtype('f8'))],
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 1 hr to fix
              tlsfuzzer/analysis.py on lines 1791..1794
              tlsfuzzer/analysis.py on lines 2180..2183

              This issue has a mass of 49.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                      data = np.memmap(name_bin,
                                       dtype=[('tuple_num', np.dtype('i8')),
                                              ('k_size', np.dtype('i2')),
                                              ('value', np.dtype('f8'))],
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 1 hr to fix
              tlsfuzzer/analysis.py on lines 1602..1605
              tlsfuzzer/analysis.py on lines 1791..1794

This issue has a mass of 49.
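
The flagged call sites build the same three-column structured memmap and differ only in the file path and in the names of the first two fields (the third location in this family, on lines 2180..2183 of the source file, is not shown here). A minimal deduplication sketch, assuming a module-level helper is acceptable and that the files are opened read-only; the helper name and the `mode` handling are assumptions, since the trailing arguments of the original calls are cut off in the snippets above:

    import numpy as np

    def _open_measurements(path, id_field, group_field, mode="r"):
        """Open a measurements file as a structured memmap.

        The three-column layout (i8 identifier, i2 group, f8 value) is
        the part shared by all flagged call sites; the field names, the
        file path and the open mode are supplied by the caller.
        """
        return np.memmap(path,
                         dtype=[(id_field, np.dtype('i8')),
                                (group_field, np.dtype('i2')),
                                ('value', np.dtype('f8'))],
                         mode=mode)

    # the two call sites shown above would then collapse to:
    # data = _open_measurements(measurements_bin_path, 'block', 'group')
    # data = _open_measurements(name_bin, 'tuple_num', 'k_size')

With one helper, a change to the on-disk record layout only has to be made in one place.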

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                                  if self.run_sign_test:
                                      results = self.sign_test()
                                      output_files['sign_test'].write(
                                          "{0} to {1}: {2}\n".format(
                                              base_group, test_group, results[(0, 1)]))
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 55 mins to fix
              tlsfuzzer/analysis.py on lines 2679..2683
              tlsfuzzer/analysis.py on lines 2685..2689

This issue has a mass of 47.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                                  if self.run_wilcoxon_test:
                                      results = self.wilcoxon_test()
                                      output_files['wilcoxon_test'].write(
                                          "{0} to {1}: {2}\n".format(
                                              base_group, test_group, results[(0, 1)]))
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 55 mins to fix
              tlsfuzzer/analysis.py on lines 2673..2677
              tlsfuzzer/analysis.py on lines 2685..2689

This issue has a mass of 47.

              Similar blocks of code found in 3 locations. Consider refactoring.
              Open

                                  if self.run_t_test:
                                      results = self.rel_t_test()
                                      output_files['t_test'].write(
                                          "{0} to {1}: {2}\n".format(
                                              base_group, test_group, results[(0, 1)]))
              Severity: Major
              Found in tlsfuzzer/analysis.py and 2 other locations - About 55 mins to fix
              tlsfuzzer/analysis.py on lines 2673..2677
              tlsfuzzer/analysis.py on lines 2679..2683

This issue has a mass of 47.
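
The three blocks above (the sign test, Wilcoxon test and paired t-test branches) differ only in the flag checked, the test method called and the output file written to, so they can be driven from a small table. A sketch under the assumption that the three blocks sit next to each other in one method and that output_files, base_group and test_group are in scope; the method name _run_paired_tests is made up:

    def _run_paired_tests(self, output_files, base_group, test_group):
        """Run whichever paired tests are enabled and log their p-values."""
        tests = (
            (self.run_sign_test, self.sign_test, 'sign_test'),
            (self.run_wilcoxon_test, self.wilcoxon_test, 'wilcoxon_test'),
            (self.run_t_test, self.rel_t_test, 't_test'),
        )
        for enabled, test, key in tests:
            if not enabled:
                continue
            results = test()
            output_files[key].write(
                "{0} to {1}: {2}\n".format(
                    base_group, test_group, results[(0, 1)]))

Adding another statistical test then means adding one tuple to the table instead of a fourth copy of the block.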

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                          if name == "trim_mean_05":
                              name_readable = "trim mean (5%)"
                          elif name == "trim_mean_25":
                              name_readable = "trim mean (25%)"
                          elif name == "trim_mean_45":
              Severity: Minor
              Found in tlsfuzzer/analysis.py and 1 other location - About 55 mins to fix
              tlsfuzzer/analysis.py on lines 1034..1039

This issue has a mass of 47.

              Similar blocks of code found in 2 locations. Consider refactoring.
              Open

                          if name == "trim mean (5%)":
                              name = "trim_mean_05"
                          elif name == "trim mean (25%)":
                              name = "trim_mean_25"
                          elif name == "trim mean (45%)":
              Severity: Minor
              Found in tlsfuzzer/analysis.py and 1 other location - About 55 mins to fix
              tlsfuzzer/analysis.py on lines 2110..2115

This issue has a mass of 47.
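
The two if/elif chains above are inverses of each other (internal name to readable label and back), so keeping a single dictionary and deriving the reverse mapping from it removes the duplication and keeps the two directions from drifting apart. A minimal sketch; the fallback behaviour and any entries beyond the three visible in the snippets are assumptions:

    _READABLE_NAMES = {
        "trim_mean_05": "trim mean (5%)",
        "trim_mean_25": "trim mean (25%)",
        "trim_mean_45": "trim mean (45%)",
    }
    _INTERNAL_NAMES = {v: k for k, v in _READABLE_NAMES.items()}

    # the flagged chains then become plain lookups, e.g.:
    # name_readable = _READABLE_NAMES.get(name, name)
    # name = _INTERNAL_NAMES.get(name, name)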

              Identical blocks of code found in 2 locations. Consider refactoring.
              Open

                      total_non_max_data = sum(self._k_sizes[i] for i in self._k_sizes
                                               if i != max(self._k_sizes.keys()))
              Severity: Minor
              Found in tlsfuzzer/analysis.py and 1 other location - About 45 mins to fix
              tlsfuzzer/analysis.py on lines 1712..1713

This issue has a mass of 45.

              Identical blocks of code found in 2 locations. Consider refactoring.
              Open

                      total_non_max_data = sum(self._k_sizes[i] for i in self._k_sizes
                                               if i != max(self._k_sizes.keys()))
              Severity: Minor
              Found in tlsfuzzer/analysis.py and 1 other location - About 45 mins to fix
              tlsfuzzer/analysis.py on lines 2336..2337

This issue has a mass of 45.
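
The identical expression appears at both locations, so it can live in one small helper; extracting it also avoids recomputing max() for every element of the generator expression. A sketch, assuming a private method on the Analysis class is acceptable (the method name is made up):

    def _total_non_max_data(self):
        """Sum of sample counts for every K-size except the largest one."""
        max_k = max(self._k_sizes)
        return sum(count for k, count in self._k_sizes.items()
                   if k != max_k)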

              Continuation line unaligned for hanging indent
              Open

                                  .format(skillings_mack_pvalue) +
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8

              Continuation lines indentation.

              Continuation lines should align wrapped elements either vertically
              using Python's implicit line joining inside parentheses, brackets
              and braces, or using a hanging indent.
              
              When using a hanging indent these considerations should be applied:
              - there should be no arguments on the first line, and
              - further indentation should be used to clearly distinguish itself
                as a continuation line.
              
              Okay: a = (\n)
              E123: a = (\n    )
              
              Okay: a = (\n    42)
              E121: a = (\n   42)
              E122: a = (\n42)
              E123: a = (\n    42\n    )
              E124: a = (24,\n     42\n)
              E125: if (\n    b):\n    pass
              E126: a = (\n        42)
              E127: a = (24,\n      42)
              E128: a = (24,\n    42)
              E129: if (a or\n    b):\n    pass
              E131: a = (\n    42\n 24)
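
For the flagged line the checker expects the continuation either to line up with the visual indent of the opening bracket or to use a consistent hanging indent. One compliant layout is reconstructed below; only the .format(skillings_mack_pvalue) continuation appears in the report, so the variable name of the result, the first string literal and the text after the '+' are assumptions:

    txt = ("Skillings-Mack test p-value: {0}\n"
           .format(skillings_mack_pvalue) +
           "(more details in the test result files)\n")

Here every continuation line is aligned with the first element after the opening parenthesis, which satisfies the E12x checks.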

              Continuation line unaligned for hanging indent
              Open

                                  .format(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                  .format(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Too many blank lines (2)
              Open

                      if self.bit_recognition_size >= len(self._k_sizes):
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8

              Separate top-level function and class definitions with two blank lines.

              Method definitions inside a class are separated by a single blank
              line.
              
              Extra blank lines may be used (sparingly) to separate groups of
              related functions.  Blank lines may be omitted between a bunch of
              related one-liners (e.g. a set of dummy implementations).
              
              Use blank lines in functions, sparingly, to indicate logical
              sections.
              
              Okay: def a():\n    pass\n\n\ndef b():\n    pass
              Okay: def a():\n    pass\n\n\nasync def b():\n    pass
              Okay: def a():\n    pass\n\n\n# Foo\n# Bar\n\ndef b():\n    pass
              Okay: default = 1\nfoo = 1
              Okay: classify = 1\nfoo = 1
              
              E301: class Foo:\n    b = 0\n    def bar():\n        pass
              E302: def a():\n    pass\n\ndef b(n):\n    pass
              E302: def a():\n    pass\n\nasync def b(n):\n    pass
              E303: def a():\n    pass\n\n\n\ndef b(n):\n    pass
              E303: def a():\n\n\n\n    pass
              E304: @decorator\n\ndef a():\n    pass
              E305: def a():\n    pass\na()
              E306: def a():\n    def b():\n        pass\n    def c():\n        pass

              Continuation line unaligned for hanging indent
              Open

                                  .format(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                  .format(self._bit_size_data_limit) +
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              The backslash is redundant between brackets
              Open

                                  "K size of {0}: {1} ({2} out of {3} passed)\n"\
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8

              Avoid explicit line join between brackets.

              The preferred way of wrapping long lines is by using Python's
              implied line continuation inside parentheses, brackets and braces.
              Long lines can be broken over multiple lines by wrapping expressions
              in parentheses.  These should be used in preference to using a
              backslash for line continuation.
              
              E502: aaa = [123, \\n       123]
              E502: aaa = ("bbb " \\n       "ccc")
              
              Okay: aaa = [123,\n       123]
              Okay: aaa = ("bbb "\n       "ccc")
              Okay: aaa = "bbb " \\n    "ccc"
              Okay: aaa = 123  # \\
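
The flagged string literal already sits inside the parentheses of a call (the ".format(k_size, pvalue, passed, total)" continuation flagged further down is most likely the rest of the same statement), so the trailing backslash can simply be dropped and implicit line joining does the rest. A reconstruction; the file handle name is an assumption:

    out_file.write(
        "K size of {0}: {1} ({2} out of {3} passed)\n"  # no trailing backslash
        .format(k_size, pvalue, passed, total))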

              Continuation line unaligned for hanging indent
              Open

                                          .format(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                          .format(base_group, test_group))
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                  .format(VERSION) +
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                  .format(
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8


              Continuation line unaligned for hanging indent
              Open

                                      .format(k_size, pvalue, passed, total)
              Severity: Minor
              Found in tlsfuzzer/analysis.py by pep8

