tensorflow/models

official/nlp/modeling/networks/bert_encoder.py

Summary

Maintainability: D (about 2 days of remediation)
Test Coverage

File bert_encoder.py has 534 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Copyright 2024 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in official/nlp/modeling/networks/bert_encoder.py - About 1 day to fix

Function __init__ has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

  def __init__(
      self,
      vocab_size,
      hidden_size=768,
      num_layers=12,
Severity: Minor
Found in official/nlp/modeling/networks/bert_encoder.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

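To make these rules concrete, the sketch below compares a deeply nested loop with a flatter version of the same logic. It is not code from bert_encoder.py, and the per-line counts in the comments are rough, hand-applied estimates rather than output from a real analyzer.

  def count_in_stock_nested(orders):
    shipped = 0
    if orders is not None:              # +1: break in the linear flow
      for order in orders:              # +2: flow break, nested once
        for item in order:              # +3: nested twice
          if item.get("in_stock"):      # +4: nested three times
            shipped += 1
    return shipped                      # roughly 10 in total

  def count_in_stock_flat(orders):
    if orders is None:                  # +1: guard clause, no nesting penalty
      return 0
    shipped = 0
    for order in orders:                # +1
      for item in order:                # +2: nested once
        if item.get("in_stock"):        # +3: nested twice
          shipped += 1
    return shipped                      # roughly 7; same behavior, less nesting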

Function __init__ has 20 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(
Severity: Major
Found in official/nlp/modeling/networks/bert_encoder.py - About 2 hrs to fix
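One common fix for a constructor with this many parameters is the parameter-object refactor. The sketch below uses hypothetical class and field names with a trimmed-down field list; it is not the actual BertEncoder signature.

  import dataclasses

  @dataclasses.dataclass
  class EncoderSettings:
    # Hypothetical grouping of related constructor arguments.
    vocab_size: int
    hidden_size: int = 768
    num_layers: int = 12
    num_attention_heads: int = 12
    dropout_rate: float = 0.1

  class Encoder:

    def __init__(self, settings: EncoderSettings):
      # One argument replaces a long list of keyword arguments.
      self.settings = settings

  encoder = Encoder(EncoderSettings(vocab_size=30522))

A Keras layer going this route still has to account for serialization in get_config, so the refactor trades a long signature for a little extra plumbing.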

Function __init__ has 19 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(
Severity: Major
Found in official/nlp/modeling/networks/bert_encoder.py - About 2 hrs to fix

Function __init__ has 46 lines of code (exceeds 25 allowed). Consider refactoring.
Open

  def __init__(
      self,
      vocab_size,
      hidden_size=768,
      num_layers=12,
Severity: Minor
Found in official/nlp/modeling/networks/bert_encoder.py - About 1 hr to fix
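A long constructor is usually shortened by pushing each sub-network's setup into a private builder method. The sketch below uses placeholder return values and hypothetical helper names, not the layers actually built in bert_encoder.py.

  class Encoder:

    def __init__(self, vocab_size, hidden_size=768, num_layers=12):
      # The constructor only delegates; each helper owns one concern.
      self._embedding = self._build_embedding(vocab_size, hidden_size)
      self._layers = self._build_transformer_layers(num_layers, hidden_size)

    def _build_embedding(self, vocab_size, hidden_size):
      # Placeholder stand-in for an embedding layer.
      return {"vocab_size": vocab_size, "hidden_size": hidden_size}

    def _build_transformer_layers(self, num_layers, hidden_size):
      # Placeholder stand-in for a stack of transformer blocks.
      return [{"hidden_size": hidden_size} for _ in range(num_layers)]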

Function __init__ has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

  def __init__(
      self,
      vocab_size: int,
      hidden_size: int = 768,
      num_layers: int = 12,
Severity: Minor
Found in official/nlp/modeling/networks/bert_encoder.py - About 1 hr to fix

Function call has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

  def call(self, inputs):
    word_embeddings = None
    if isinstance(inputs, dict):
      word_ids = inputs.get('input_word_ids')
      mask = inputs.get('input_mask')
Severity: Minor
Found in official/nlp/modeling/networks/bert_encoder.py - About 1 hr to fix
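Much of the branching in call comes from accepting either a dict or a positional tuple of inputs. One way to flatten it is to move the unpacking out of call entirely, sketched below; the helper name, the 'input_type_ids' key, and the placeholder body are assumptions, not the code in bert_encoder.py.

  def _unpack_inputs(inputs):
    """Returns (word_ids, mask, type_ids) from a dict or a 3-sequence."""
    if isinstance(inputs, dict):
      return (inputs.get('input_word_ids'),
              inputs.get('input_mask'),
              inputs.get('input_type_ids'))
    word_ids, mask, type_ids = inputs
    return word_ids, mask, type_ids

  class Encoder:

    def call(self, inputs):
      # call() itself no longer branches on the container type.
      word_ids, mask, type_ids = _unpack_inputs(inputs)
      return word_ids, mask, type_ids  # placeholder for the real encoder body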

Function _get_embeddings has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def _get_embeddings(self, word_ids: tf.Tensor, type_ids: tf.Tensor,
Severity: Minor
Found in official/nlp/modeling/networks/bert_encoder.py - About 35 mins to fix
