tensorflow/models

official/nlp/modeling/layers/text_layers.py

Summary

Maintainability: D (about 2 days of estimated remediation time)
Test Coverage: no data reported

File text_layers.py has 606 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Copyright 2024 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in official/nlp/modeling/layers/text_layers.py - About 1 day to fix

Function __init__ has 9 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self,
Severity: Major
Found in official/nlp/modeling/layers/text_layers.py - About 1 hr to fix
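
A common way to address this kind of argument-count warning on a layer constructor is to group related settings into a single config object, so that __init__ keeps a short signature. The sketch below is illustrative only and assumes a hypothetical TokenizerConfig; it is not the actual signature of any layer in text_layers.py.

  # Illustrative sketch only: a hypothetical config dataclass bundles the
  # constructor options, so __init__ takes one argument instead of nine.
  import dataclasses
  from typing import Optional

  import tensorflow as tf


  @dataclasses.dataclass
  class TokenizerConfig:
    """Hypothetical bundle of tokenizer options (field names are made up)."""
    vocab_file: str
    lower_case: bool = True
    tokenize_with_offsets: bool = False
    oov_token: Optional[str] = None


  class ExampleTokenizerLayer(tf.keras.layers.Layer):
    """Illustrative layer whose constructor takes one config object."""

    def __init__(self, config: TokenizerConfig, **kwargs):
      super().__init__(**kwargs)
      # Keep the whole config; individual options are read where needed.
      self._config = config

A call site then reads ExampleTokenizerLayer(TokenizerConfig(vocab_file="vocab.txt")), and new options can be added to the config without widening the constructor signature.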

Function __init__ has 8 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self,
Severity: Major
Found in official/nlp/modeling/layers/text_layers.py - About 1 hr to fix

Function __init__ has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self, *,
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 45 mins to fix

Function bert_pack_inputs has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

  def bert_pack_inputs(inputs: Union[tf.RaggedTensor, List[tf.RaggedTensor]],
                       seq_length: Union[int, tf.Tensor],
                       start_of_sequence_id: Union[int, tf.Tensor],
                       end_of_segment_id: Union[int, tf.Tensor],
                       padding_id: Union[int, tf.Tensor],
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

Further reading
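
As an informal illustration of these rules (the helpers below are not taken from text_layers.py), the two functions do the same work; the nested version accrues an extra increment for each flow-breaking structure placed inside another, while the guard-clause version keeps the flow linear and therefore scores lower.

  # Illustrative only; these helpers are not part of text_layers.py.

  def count_long_tokens_nested(batches):
    # An if inside a for inside a for: each nested flow break adds an
    # increment on top of the base cost of the structure itself.
    total = 0
    for batch in batches:
      if batch is not None:
        for token in batch:
          if len(token) > 3:
            total += 1
    return total


  def count_long_tokens_flat(batches):
    # A guard clause plus a comprehension keep the flow linear, so the
    # same logic reads (and scores) as less complex.
    total = 0
    for batch in batches:
      if batch is None:
        continue
      total += sum(1 for token in batch if len(token) > 3)
    return total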

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self,
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 35 mins to fix

Function bert_pack_inputs has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def bert_pack_inputs(inputs: Union[tf.RaggedTensor, List[tf.RaggedTensor]],
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 35 mins to fix

Function _init_token_ids has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _init_token_ids(
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 35 mins to fix

Function _create_special_tokens_dict has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

  def _create_special_tokens_dict(self, vocab_table, vocab_file):
    special_tokens = dict(start_of_sequence_id="[CLS]",
                          end_of_segment_id="[SEP]",
                          padding_id="[PAD]",
                          mask_id="[MASK]")
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 25 mins to fix
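
One typical way to bring this kind of special-token handling back under the complexity threshold is to drive the lookups from a single loop instead of per-token branches. The sketch below is a hypothetical illustration, not the implementation in text_layers.py, and it assumes eager execution and a string-keyed tf.lookup table.

  # Hypothetical sketch; not the actual code from text_layers.py.
  import tensorflow as tf


  def lookup_special_token_ids(vocab_table, special_tokens):
    """Looks up each special token id in one linear pass."""
    token_ids = {}
    for name, token in special_tokens.items():
      # One lookup per token; a single loop body replaces repeated
      # per-token if/else branches.
      token_id = vocab_table.lookup(tf.constant(token))
      token_ids[name] = int(token_id.numpy())  # assumes eager execution
    return token_ids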

Function call has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

  def call(self, inputs: tf.Tensor):
    """Calls `text.SentencepieceTokenizer` on inputs.

    Args:
      inputs: A string Tensor of shape `(batch_size,)`.
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 25 mins to fix

Function _create_special_tokens_dict has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

  def _create_special_tokens_dict(self):
    special_tokens = dict(
        start_of_sequence_id=b"[CLS]",
        end_of_segment_id=b"[SEP]",
        padding_id=b"<pad>",
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 25 mins to fix

Function _init_token_ids has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

  def _init_token_ids(
      self, *,
      start_of_sequence_id,
      end_of_segment_id,
      padding_id,
Severity: Minor
Found in official/nlp/modeling/layers/text_layers.py - About 25 mins to fix
