tensorflow/tensorflow
tensorflow/lite/python/util.py

Summary

Maintainability: F (estimated 2 wks to fix)
Test Coverage

File util.py has 882 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in tensorflow/lite/python/util.py - About 2 days to fix

    Function _remove_redundant_quantize_ops_per_subgraph has a Cognitive Complexity of 57 (exceeds 5 allowed). Consider refactoring.

    def _remove_redundant_quantize_ops_per_subgraph(model, subgraph_index,
                                                    signature_index):
      """Remove redundant quantize ops per subgraph."""
      subgraph = model.subgraphs[subgraph_index]
      tensors = subgraph.tensors
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 day to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

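The rules above can be made concrete with a small pair of equivalent functions (illustrative only, not code from util.py): the nested version pays a nesting increment at every extra level of control flow, while an early-return rewrite keeps the score low.

```python
# Illustrative only -- not code from util.py. Both functions return the
# first negative value found in a list of rows, but the nested version
# accrues a nesting increment for every additional level.

def first_negative_nested(rows):
    result = None
    for row in rows:                    # +1
        if row is not None:             # +2 (nested one level)
            for value in row:           # +3 (nested two levels)
                if value < 0:           # +4 (nested three levels)
                    if result is None:  # +5 (nested four levels)
                        result = value
    return result                       # roughly 15 in total

def first_negative_flat(rows):
    for row in rows or []:              # +1
        for value in row or []:         # +2
            if value < 0:               # +3
                return value            # early exit, no extra nesting
    return None                         # roughly 6 in total
```

The flat version is behaviorally identical; the guard expressions (`or []`) and the early `return` are what keep every branch shallow.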

    Function _modify_model_output_type_per_subgraph has a Cognitive Complexity of 53 (exceeds 5 allowed). Consider refactoring.

    def _modify_model_output_type_per_subgraph(model, subgraph_index,
                                               signature_index,
                                               inference_output_type):
      """Modify model output type per subgraph."""
      subgraph = model.subgraphs[subgraph_index]
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 day to fix

    Function _modify_model_input_type_per_subgraph has a Cognitive Complexity of 46 (exceeds 5 allowed). Consider refactoring.

    def _modify_model_input_type_per_subgraph(model, subgraph_index,
                                              signature_index,
                                              inference_input_type):
      """Modify model input type per subgraph."""
      subgraph = model.subgraphs[subgraph_index]
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 7 hrs to fix

    Function get_model_hash has a Cognitive Complexity of 36 (exceeds 5 allowed). Consider refactoring.

    def get_model_hash(model):
      """Calculate a 64-bit integer hash for a TensorFlow Lite model based on its structure.
    
      Args:
          model: A TensorFlow Lite model object.
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 5 hrs to fix
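The docstring above describes a 64-bit structural hash, and a snippet further down this report shows the real code calling a helper named `update_hash_with_primitive_value`. The following is only a guessed FNV-1a-style sketch of that idea, not the actual util.py implementation.

```python
# Hedged sketch of a 64-bit structural hash, NOT the real util.py code:
# fold each integer-valued structural primitive into an FNV-1a-style
# running hash, masking so the accumulator stays within 64 bits.

FNV_OFFSET_BASIS = 14695981039346656037
FNV_PRIME = 1099511628211
MASK_64 = (1 << 64) - 1

def update_hash_with_primitive_value(hash_value, value):
    # XOR then multiply, as in FNV-1a; the mask truncates to 64 bits.
    return ((hash_value ^ value) * FNV_PRIME) & MASK_64

def get_model_hash_sketch(structural_values):
    hash_value = FNV_OFFSET_BASIS
    for value in structural_values:
        hash_value = update_hash_with_primitive_value(hash_value, value)
    return hash_value
```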

    Function _remove_tensors_from_model has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.

    def _remove_tensors_from_model(model, remove_tensors_idxs):
      """Remove tensors from model."""
      if not remove_tensors_idxs:
        return
      if len(model.subgraphs) > 1:
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 2 hrs to fix

    Function build_debug_info_func has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.

    def build_debug_info_func(original_graph):
      """Returns a method to retrieve the `GraphDebugInfo` from the original graph.
    
      Args:
        original_graph: The original `Graph` containing all the op stack traces.
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 2 hrs to fix

    Function set_tensor_shapes has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.

    def set_tensor_shapes(tensors, shapes):
      """Sets Tensor shape for each tensor if the shape is defined.
    
      Args:
        tensors: TensorFlow tensor.Tensor.
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 hr to fix

    Function get_sparsity_modes has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.

    def get_sparsity_modes(model_object):
      """Get sparsity modes used in a tflite model.
    
      The sparsity modes are listed in conversion_metadata.fbs file.
    
    
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 hr to fix

    Function get_debug_info has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.

    def get_debug_info(nodes_to_debug_info_func, converted_graph):
      """Returns the debug info for the original nodes in the `converted_graph`.
    
      Args:
        nodes_to_debug_info_func: The method to collect the op debug info for the
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 hr to fix

    Function get_tensors_from_tensor_names has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.

    def get_tensors_from_tensor_names(graph, tensor_names):
      """Gets the Tensors associated with the `tensor_names` in the provided graph.
    
      Args:
        graph: TensorFlow Graph.
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 1 hr to fix

    Function convert_bytes_to_c_source has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.

    def convert_bytes_to_c_source(data,
                                  array_name,
                                  max_line_width=80,
                                  include_guard=None,
                                  include_path=None,
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 55 mins to fix

    Function populate_conversion_metadata has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.

    def populate_conversion_metadata(model_object, metadata):
      """Add or update conversion metadata to a tflite model.
    
      Args:
        model_object: A tflite model in object form.
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 55 mins to fix

    Avoid deeply nested control flow statements.

              if output.tensorIndex == dequant_op.outputs[0]:
                output.tensorIndex = op.inputs[0]
          operators.remove(op)
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix
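As a hedged illustration (plain dicts stand in for the flatbuffer objects, names invented), deep nesting like the rewiring loop above usually flattens with a guard clause: invert the condition and `continue`, so each branch stays shallow.

```python
# Illustrative sketch, not the real util.py code: plain dicts stand in
# for the flatbuffer operator and signature objects. A `continue` guard
# replaces one nesting level in the output-rewiring loop.

def rewire_removed_op_outputs(signature_outputs, removed_ops):
    """Repoint outputs that referenced a removed op's output tensor."""
    for op in removed_ops:
        for output in signature_outputs:
            if output["tensorIndex"] != op["outputs"][0]:
                continue  # guard clause: skip unrelated outputs early
            output["tensorIndex"] = op["inputs"][0]
```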

    Function convert_bytes_to_c_source has 6 arguments (exceeds 4 allowed). Consider refactoring.

    def convert_bytes_to_c_source(data,
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 45 mins to fix
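A common remedy for a six-argument signature, sketched here with hypothetical names (the real convert_bytes_to_c_source signature is only partially shown above): group the optional formatting knobs into a dataclass so callers pass one object.

```python
# Hypothetical sketch, not the real TFLite API: bundle the optional
# formatting parameters into a dataclass to shrink the parameter list.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CSourceOptions:
    max_line_width: int = 80
    include_guard: Optional[str] = None
    include_path: Optional[str] = None
    use_const: bool = True

def bytes_to_c_array(data, array_name, options=None):
    """Render bytes as a C array declaration (simplified illustration)."""
    options = options or CSourceOptions()
    qualifier = "const " if options.use_const else ""
    body = ", ".join(f"0x{b:02x}" for b in data)
    return f"{qualifier}unsigned char {array_name}[] = {{{body}}};"
```

Callers with default needs pass two arguments; anyone tweaking a knob names it explicitly via the options object, which also keeps future options from growing the signature again.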

    Avoid deeply nested control flow statements.

              for idx, input_tensor in enumerate(op_user.inputs):
                if input_tensor == deleted_tensor:
                  op_user.inputs[idx] = requantize_op.outputs[0]
          operators.remove(op)
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix

    Avoid deeply nested control flow statements.

            if signature_def.inputs[i].tensorIndex == op.inputs[0]:
              signature_def.inputs[i].tensorIndex = op.outputs[0]
        remove_tensors_idxs.add(op.inputs[0])
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix

    Avoid deeply nested control flow statements.

            if signature_def.outputs[i].tensorIndex == op.outputs[0]:
              signature_def.outputs[i].tensorIndex = op.inputs[0]
        remove_tensors_idxs.add(op.outputs[0])
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix

    Avoid deeply nested control flow statements.

            if buffer.data is not None:
              hash_value = update_hash_with_primitive_value(
                  hash_value, len(buffer.data)
              )
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix

    Avoid deeply nested control flow statements.

            if output.tensorIndex == op.outputs[0]:
              output.tensorIndex = op.inputs[0]
        deleted_tensor = requantize_op.inputs[0]
    Severity: Major
    Found in tensorflow/lite/python/util.py - About 45 mins to fix

    Function run_graph_optimizations has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def run_graph_optimizations(graph_def,
    Severity: Minor
    Found in tensorflow/lite/python/util.py - About 35 mins to fix

    Similar blocks of code found in 2 locations. Consider refactoring.

      for op in operators:
        # Find operators that dequantize model output
        if (op.opcodeIndex in dequant_opcode_idxs and
            op.outputs[0] in subgraph.outputs):
          # If found, validate that the operator's output type is float
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 2 days to fix
    tensorflow/lite/python/util.py on lines 669..702

                    Duplicated Code

                    Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                    Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                    When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                    Tuning

                    This issue has a mass of 260.

                    We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                    The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                    If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                    See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

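One way to honor DRY here, as a hedged sketch (dicts instead of flatbuffer objects, invented helper name): both flagged loops scan the operator list for a boundary-crossing opcode, differing only in the opcode set and in whether the op's input or output tensor must sit on the subgraph boundary, so a single parameterized scan can serve both call sites.

```python
# Illustrative sketch only -- the real util.py operates on flatbuffer
# objects. One parameterized scan replaces the twin quantize/dequantize
# loops: `side` selects whether the op's first input (model-input side)
# or first output (model-output side) must lie on the boundary.

def find_boundary_ops(operators, opcode_idxs, boundary_tensors, side):
    found = []
    for op in operators:
        tensor = op["inputs"][0] if side == "input" else op["outputs"][0]
        if op["opcodeIndex"] in opcode_idxs and tensor in boundary_tensors:
            found.append(op)
    return found

# input_quant_ops = find_boundary_ops(ops, quant_idxs, subgraph_inputs, "input")
# output_dequant_ops = find_boundary_ops(ops, dequant_idxs, subgraph_outputs, "output")
```

The per-direction validation that follows each scan would stay at the call sites; only the shared discovery logic moves into the helper.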

    Similar blocks of code found in 2 locations. Consider refactoring.

      for op in operators:
        # Find operators that quantize model input
        if op.opcodeIndex in quant_opcode_idxs and op.inputs[0] in subgraph.inputs:
          float_tensor, quant_tensor = tensors[op.inputs[0]], tensors[op.outputs[0]]
          # If found, validate that the operator's input type is float
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 2 days to fix
    tensorflow/lite/python/util.py on lines 776..810

    This issue has a mass of 260.

    Similar blocks of code found in 2 locations. Consider refactoring.

      if inference_input_type == dtypes.uint8:
        # Change quant op (float to int8) to quant op (uint8 to int8)
        for op in input_quant_ops:
          int8_quantization = tensors[op.outputs[0]].quantization
          uint8_quantization = schema_fb.QuantizationParametersT()
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 1 day to fix
    tensorflow/lite/python/util.py on lines 820..861

    This issue has a mass of 202.

    Similar blocks of code found in 2 locations. Consider refactoring.

      if inference_output_type == dtypes.uint8:
        # Find a quantize operator
        quant_opcode_idx = -1
        for idx, opcode in enumerate(model.operatorCodes):
          builtin_code = schema_util.get_builtin_code_from_operator_code(opcode)
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 1 day to fix
    tensorflow/lite/python/util.py on lines 712..738

    This issue has a mass of 202.

    Similar blocks of code found in 2 locations. Consider refactoring.

    def _modify_model_input_type(model, inference_input_type=dtypes.float32):
      """Modify model input type."""
      if inference_input_type == dtypes.float32:
        return
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 4 hrs to fix
    tensorflow/lite/python/util.py on lines 741..753

    This issue has a mass of 75.

    Similar blocks of code found in 2 locations. Consider refactoring.

    def _modify_model_output_type(model, inference_output_type=dtypes.float32):
      """Modify model output type."""
      if inference_output_type == dtypes.float32:
        return
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 4 hrs to fix
    tensorflow/lite/python/util.py on lines 635..646

    This issue has a mass of 75.

    Similar blocks of code found in 2 locations. Consider refactoring.

      for array in input_arrays:
        signature.inputs[array.name].name = array.name
        signature.inputs[array.name].dtype = array.dtype.as_datatype_enum
        signature.inputs[array.name].tensor_shape.CopyFrom(array.shape.as_proto())
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 3 hrs to fix
    tensorflow/lite/python/util.py on lines 245..248

    This issue has a mass of 73.

    Similar blocks of code found in 2 locations. Consider refactoring.

      for array in output_arrays:
        signature.outputs[array.name].name = array.name
        signature.outputs[array.name].dtype = array.dtype.as_datatype_enum
        signature.outputs[array.name].tensor_shape.CopyFrom(array.shape.as_proto())
    Severity: Major
    Found in tensorflow/lite/python/util.py and 1 other location - About 3 hrs to fix
    tensorflow/lite/python/util.py on lines 240..243

                    This issue has a mass of 73.
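The two flagged loops differ only in whether they write `signature.inputs` or `signature.outputs`, so both call sites could delegate to one helper that takes the target map and the array list. A minimal sketch — plain dicts and `SimpleNamespace` stand in for the real protobuf `TensorInfo` objects, so the field-copy calls here are simplifications of the original `CopyFrom`/enum logic:

```python
from types import SimpleNamespace

def fill_signature_map(signature_map, arrays):
    """Copy name/dtype/shape from each array into the signature map.

    `signature_map` stands in for signature.inputs or signature.outputs;
    the real code writes protobuf TensorInfo messages, sketched as dicts.
    """
    for array in arrays:
        signature_map[array.name] = {
            "name": array.name,
            "dtype": array.dtype,
            "shape": array.shape,
        }

# Demo: one helper replaces both duplicated loops.
inputs, outputs = {}, {}
in_arrays = [SimpleNamespace(name="x", dtype="float32", shape=[1, 4])]
out_arrays = [SimpleNamespace(name="y", dtype="float32", shape=[1, 2])]
fill_signature_map(inputs, in_arrays)
fill_signature_map(outputs, out_arrays)
```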

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                    def get_quantize_opcode_idx(model):
                      """Returns the quantize op idx."""
                      quant_opcode_idxs = []
                      for idx, opcode in enumerate(model.operatorCodes):
                        builtin_code = schema_util.get_builtin_code_from_operator_code(opcode)
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 3 hrs to fix
                    tensorflow/lite/python/util.py on lines 568..575

                    This issue has a mass of 66.

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                    def get_dequantize_opcode_idx(model):
                      """Returns the quantize op idx."""
                      quant_opcode_idxs = []
                      for idx, opcode in enumerate(model.operatorCodes):
                        builtin_code = schema_util.get_builtin_code_from_operator_code(opcode)
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 3 hrs to fix
                    tensorflow/lite/python/util.py on lines 558..565

                    This issue has a mass of 66.
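`get_quantize_opcode_idx` and `get_dequantize_opcode_idx` differ only in the builtin code they match, so both can become one-line wrappers around a single parameterized lookup. A sketch — the string constants and the `get_builtin_code` function are stand-ins for the real TFLite schema enums and `schema_util.get_builtin_code_from_operator_code`:

```python
from types import SimpleNamespace

# Stand-ins for the real TFLite builtin-operator enum values.
QUANTIZE, DEQUANTIZE, ADD = "QUANTIZE", "DEQUANTIZE", "ADD"

def get_builtin_code(opcode):
  # Stand-in for schema_util.get_builtin_code_from_operator_code.
  return opcode.builtinCode

def get_opcode_idxs(model, builtin_code):
  """Returns indices of operator codes matching `builtin_code`."""
  return [idx for idx, opcode in enumerate(model.operatorCodes)
          if get_builtin_code(opcode) == builtin_code]

# The two original functions collapse into thin wrappers.
def get_quantize_opcode_idx(model):
  return get_opcode_idxs(model, QUANTIZE)

def get_dequantize_opcode_idx(model):
  return get_opcode_idxs(model, DEQUANTIZE)

# Tiny demo model with three operator codes.
model = SimpleNamespace(operatorCodes=[
    SimpleNamespace(builtinCode=ADD),
    SimpleNamespace(builtinCode=QUANTIZE),
    SimpleNamespace(builtinCode=DEQUANTIZE),
])
print(get_quantize_opcode_idx(model))    # [1]
print(get_dequantize_opcode_idx(model))  # [2]
```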

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                      if operators and not dequant_opcode_idxs:
                        for output in subgraph.outputs:
                          output_type = _convert_tflite_enum_type_to_tf_type(tensors[output].type)
                          if output_type == dtypes.float32:
                            raise ValueError("Model output is not dequantized.")
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 2 hrs to fix
                    tensorflow/lite/python/util.py on lines 659..665

                    This issue has a mass of 58.

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                      if operators and not quant_opcode_idxs:
                        for input_idx in subgraph.inputs:
                          input_type = _convert_tflite_enum_type_to_tf_type(tensors[input_idx].type)
                          if input_type == dtypes.float32:
                            raise ValueError("Model input is not dequantized.")
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 2 hrs to fix
                    tensorflow/lite/python/util.py on lines 766..772

                    This issue has a mass of 58.
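The two validation loops above differ only in the tensor list they scan and the word in the error message, so they could share one helper parameterized on both. A sketch — tensor types are plain strings here, standing in for `_convert_tflite_enum_type_to_tf_type` and the `dtypes.float32` comparison:

```python
from types import SimpleNamespace

def check_not_float32(tensor_idxs, tensors, kind):
  """Raises if any listed tensor is still float32.

  `kind` is "input" or "output", collapsing the two near-identical loops
  into one call site each.
  """
  for idx in tensor_idxs:
    if tensors[idx].type == "float32":
      raise ValueError(f"Model {kind} is not dequantized.")

tensors = [SimpleNamespace(type="int8"), SimpleNamespace(type="float32")]
check_not_float32([0], tensors, "input")  # passes: tensor 0 is int8
try:
  check_not_float32([1], tensors, "output")
except ValueError as e:
  print(e)  # Model output is not dequantized.
```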

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                            for output in signature_def.outputs:
                              if output.tensorIndex == op.outputs[0]:
                                output.tensorIndex = op.inputs[0]
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 1 hr to fix
                    tensorflow/lite/python/util.py on lines 934..936

                    This issue has a mass of 43.

                    Similar blocks of code found in 2 locations. Consider refactoring.
                    Open

                            for output in signature_def.outputs:
                              if output.tensorIndex == dequant_op.outputs[0]:
                                output.tensorIndex = op.inputs[0]
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 1 hr to fix
                    tensorflow/lite/python/util.py on lines 912..914

                    This issue has a mass of 43.
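Both rewiring loops redirect signature outputs from a removed op's output tensor to its input tensor, so a helper taking the old and new tensor indices covers both call sites. A sketch with `SimpleNamespace` standing in for the flatbuffer `SignatureDef` objects:

```python
from types import SimpleNamespace

def remap_signature_outputs(signature_def, old_tensor_idx, new_tensor_idx):
    """Point signature outputs that referenced old_tensor_idx at new_tensor_idx."""
    for output in signature_def.outputs:
        if output.tensorIndex == old_tensor_idx:
            output.tensorIndex = new_tensor_idx

# Demo: output 0 referenced the removed op's output tensor (5);
# after remapping it points at the op's input tensor (2) instead.
sig = SimpleNamespace(outputs=[SimpleNamespace(tensorIndex=5),
                               SimpleNamespace(tensorIndex=7)])
remap_signature_outputs(sig, old_tensor_idx=5, new_tensor_idx=2)
print([o.tensorIndex for o in sig.outputs])  # [2, 7]
```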

                    Identical blocks of code found in 2 locations. Consider refactoring.
                    Open

                        if (len(array_line) + 4) > max_line_width:
                          array_lines.append(array_line + "\n")
                          array_line = starting_pad
                    Severity: Major
                    Found in tensorflow/lite/python/util.py and 1 other location - About 1 hr to fix
                    tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py on lines 57..59

                    This issue has a mass of 40.

                    Identical blocks of code found in 2 locations. Consider refactoring.
                    Open

                      if len(array_line) > len(starting_pad):
                        array_lines.append(array_line + "\n")
                    Severity: Minor
                    Found in tensorflow/lite/python/util.py and 1 other location - About 35 mins to fix
                    tensorflow/lite/experimental/acceleration/compatibility/convert_binary_to_cc_source.py on lines 61..62

                    This issue has a mass of 33.
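The identical wrap-and-flush fragments shared with convert_binary_to_cc_source.py suggest extracting one line-wrapping helper that both files can import. A sketch — the function name is hypothetical, and the 4-character reserve mirrors the original's `+ 4`:

```python
def wrap_array_text(items, starting_pad, max_line_width=80):
    """Join items into newline-terminated lines bounded by max_line_width.

    Combines the two duplicated fragments: flush a line when appending
    would overflow the width (reserving 4 characters, as in the original),
    and flush any partially filled final line.
    """
    lines = []
    array_line = starting_pad
    for item in items:
        array_line += item
        if (len(array_line) + 4) > max_line_width:
            lines.append(array_line + "\n")
            array_line = starting_pad
    if len(array_line) > len(starting_pad):
        lines.append(array_line + "\n")
    return lines

# Demo: 20 six-character items, 2-space pad, width 20 -> 3 items per line.
lines = wrap_array_text(["0x00, "] * 20, "  ", max_line_width=20)
print(len(lines))  # 7
```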
