tensorflow/models

official/nlp/data/create_xlnet_pretraining_data.py

Summary

Maintainability: F (about 3 days to fix)
Test Coverage: not available

File create_xlnet_pretraining_data.py has 573 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# Copyright 2024 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 day to fix

Function _create_a_and_b_segments has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.
Open

def _create_a_and_b_segments(
    tokens: np.array,
    sentence_ids: np.array,
    begin_index: int,
    total_length: int,
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules (a short sketch follows the list):

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

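To make the rules concrete, here is a small, purely illustrative Python sketch (not code from create_xlnet_pretraining_data.py). Both functions do the same work, but the first nests its flow-breaking structures while the second keeps the flow flat with a guard clause and language shorthand, which is roughly why refactorings of this kind lower the score.

# Illustrative only -- not taken from create_xlnet_pretraining_data.py.

def count_positive_nested(batches):
  """Harder to read: every flow break sits inside another flow break."""
  total = 0
  for batch in batches:            # break in linear flow
    if batch is not None:          # nested break
      for item in batch:           # nested deeper
        if item > 0:               # nested deeper still
          total += 1
  return total

def count_positive_flat(batches):
  """Same result, flatter flow: a guard clause plus a comprehension."""
  total = 0
  for batch in batches:
    if batch is None:              # guard clause, then back to the main path
      continue
    total += sum(1 for item in batch if item > 0)
  return total

assert count_positive_nested([[1, -2, 3], None, [0, 5]]) == count_positive_flat([[1, -2, 3], None, [0, 5]]) == 3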

Function preprocess_and_tokenize_input_files has a Cognitive Complexity of 20 (exceeds 5 allowed). Consider refactoring.
Open

def preprocess_and_tokenize_input_files(
    input_files: Iterable[str],
    tokenizer: tokenization.FullSentencePieceTokenizer,
    use_eod: bool = True,
    do_lower_case: bool = False,
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 2 hrs to fix

Function create_tfrecords has 15 arguments (exceeds 4 allowed). Consider refactoring.
Open

def create_tfrecords(
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix
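Findings like this one (and the 13-argument get_tfrecord_name below) are commonly addressed by grouping related parameters into a single configuration object. The sketch below is a hypothetical illustration, not the module's actual API: the field names are borrowed from the excerpts in this report where possible, and save_dir is invented for the example.

# Hypothetical sketch; field names not shown in the report's excerpts
# (e.g. save_dir) are invented for illustration.
import dataclasses
from typing import Optional

@dataclasses.dataclass
class PretrainingDataConfig:
  """Bundles the knobs that would otherwise be 15 separate arguments."""
  input_file_or_files: str
  seq_length: int
  per_host_batch_size: int
  num_cores_per_host: int
  use_eod_token: bool = True
  do_lower_case: bool = False
  bi_data: bool = False
  save_dir: Optional[str] = None

def create_tfrecords_from_config(tokenizer, config: PretrainingDataConfig):
  """A single config argument keeps the public signature small and self-documenting."""
  del tokenizer  # placeholder body for the sketch
  return config

Call sites then read create_tfrecords_from_config(tokenizer, PretrainingDataConfig(input_file_or_files="data/*.txt", seq_length=512, per_host_batch_size=32, num_cores_per_host=8)), which makes each value's role explicit.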

Function get_tfrecord_name has 13 arguments (exceeds 4 allowed). Consider refactoring.
Open

def get_tfrecord_name(
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix

Function _convert_tokens_to_instances has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

def _convert_tokens_to_instances(
    tokens: np.array,
    sentence_ids: np.array,
    per_host_batch_size: int,
    seq_length: int,
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix

Function get_tfrecord_name has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def get_tfrecord_name(
    per_host_batch_size: int,
    num_cores_per_host: int,
    seq_length: int,
    bi_data: bool,
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix

Function _convert_tokens_to_instances has 9 arguments (exceeds 4 allowed). Consider refactoring.
Open

def _convert_tokens_to_instances(
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix

Function create_tfrecords has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def create_tfrecords(
    tokenizer: tokenization.FullSentencePieceTokenizer,
    input_file_or_files: str,
    use_eod_token: bool,
    do_lower_case: bool,
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 1 hr to fix

Avoid deeply nested control flow statements.
Open

          if use_eod:
            token_ids = [eod_symbol]
            sentence_id = not sentence_id
          else:
            continue
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 45 mins to fix
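A finding like this is usually resolved by turning the inner branch into an early continue so the main path stays shallow. The sketch below rebuilds the excerpt in a self-contained, hypothetical helper; the surrounding loop, the tokenizer, and the names use_eod, eod_symbol, and sentence_id are assumed from the excerpt rather than copied from the file.

# Hypothetical, self-contained illustration of the guard-clause rewrite;
# not the file's actual implementation.

def fake_tokenize(line):
  """Stand-in tokenizer: one integer id per whitespace-separated token."""
  return list(range(len(line.split())))

def split_into_documents(lines, use_eod, eod_symbol=7):
  """Yields (token_ids, sentence_id), flipping sentence_id at blank-line boundaries."""
  sentence_id = True
  for line in lines:
    if not line.strip() and not use_eod:
      continue                          # early exit: nothing to emit for this line
    if not line.strip():
      token_ids = [eod_symbol]          # end-of-document marker
      sentence_id = not sentence_id     # next document gets the other segment id
    else:
      token_ids = fake_tokenize(line)
    yield token_ids, sentence_id

print(list(split_into_documents(["a b", "", "c"], use_eod=True)))

The two top-level checks replace the if/else that sat one level deeper in the original excerpt, which is what the "deeply nested control flow" check is asking for.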

Function preprocess_and_tokenize_input_files has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def preprocess_and_tokenize_input_files(
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 35 mins to fix

Function _create_a_and_b_segments has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

def _create_a_and_b_segments(
Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 35 mins to fix

Function shuffle_and_combine_preprocessed_data has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
Open

def shuffle_and_combine_preprocessed_data(
    all_data: List[Tuple[np.array, np.array]]) -> Tuple[np.array, np.array]:
  """Shuffles and combines preprocessed token/sentence IDs from documents."""
  document_permutation = np.random.permutation(len(all_data))

Severity: Minor
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 25 mins to fix
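For context on what this function is doing, a document-level shuffle followed by concatenation typically looks like the sketch below; it is reconstructed from the excerpt above and is not the file's full implementation.

# Reconstruction for illustration; mirrors the excerpt but is not the real function.
from typing import List, Tuple
import numpy as np

def shuffle_and_combine(all_data: List[Tuple[np.ndarray, np.ndarray]]
                        ) -> Tuple[np.ndarray, np.ndarray]:
  """Permutes whole documents, then concatenates their token and sentence ids."""
  order = np.random.permutation(len(all_data))
  tokens = np.concatenate([all_data[i][0] for i in order])
  sentence_ids = np.concatenate([all_data[i][1] for i in order])
  return tokens, sentence_ids

docs = [(np.array([1, 2]), np.array([0, 0])), (np.array([3]), np.array([1]))]
tokens, sentence_ids = shuffle_and_combine(docs)
assert tokens.shape == sentence_ids.shape == (3,)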
