tensorflow/models
official/legacy/xlnet/preprocess_pretrain_data.py

Summary

Maintainability: F (estimated 1 wk of effort)
Test Coverage: not reported

File preprocess_pretrain_data.py has 733 lines of code (exceeds 250 allowed). Consider refactoring.

# Copyright 2024 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 1 day to fix.

Function _sample_mask_ngram has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.

    def _sample_mask_ngram(sp, seg, reverse=False, max_gram=5,
                           goal_num_predict=None):
      """Sample `goal_num_predict` tokens for partial prediction."""

      seg_len = len(seg)

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 3 hrs to fix.

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
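As an illustration of these rules (hypothetical code, not from this file), deeply nested flow-breaking structures score higher than an equivalent flattened with a guard clause and a comprehension; the `+1` annotations are a rough application of the rules above, not the tool's exact scoring:

```python
def count_valid_nested(items, limit):
    """Nested version: each nested `if`/`for` level adds complexity."""
    count = 0
    if items:                      # +1 (break in linear flow)
        for item in items:         # +1, plus +1 for nesting
            if item is not None:   # +1, plus +2 for double nesting
                if item < limit:
                    count += 1
    return count


def count_valid_flat(items, limit):
    """Same behavior with a guard clause and a comprehension:
    fewer nested flow breaks, so a lower cognitive-complexity score."""
    if not items:                  # guard clause, no nesting
        return 0
    return sum(1 for item in items if item is not None and item < limit)
```

Both functions return the same result; only the readability (and the score) differs.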

Function _create_data has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.

    def _create_data(idx, input_paths):
      """Creates data."""
      # Load sentence-piece model
      sp = spm.SentencePieceProcessor()
      sp.Load(FLAGS.sp_path)

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 3 hrs to fix.


Function _split_a_and_b has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.

    def _split_a_and_b(data, sent_ids, begin_idx, tot_len, extend_target=False):
      """Split two segments from `data` starting from the index `begin_idx`."""

      data_len = data.shape[0]
      if begin_idx + tot_len >= data_len:

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 3 hrs to fix.

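The real `_split_a_and_b` also aligns the cut with sentence boundaries via `sent_ids` and can return extended targets, which is where most of its complexity lives. A much-simplified, hypothetical pure-Python sketch of just the windowing-and-splitting idea (not the module's actual logic):

```python
import random


def split_a_and_b_sketch(data, begin_idx, tot_len):
    """Simplified sketch: take a window of `tot_len` tokens starting at
    `begin_idx` and split it into segments A and B at a random cut point.
    The actual function additionally respects sentence boundaries."""
    data_len = len(data)
    if begin_idx + tot_len >= data_len:
        return None  # not enough tokens left, mirroring the early exit above
    window = data[begin_idx:begin_idx + tot_len]
    cut = random.randint(1, tot_len - 1)  # keep both segments non-empty
    return window[:cut], window[cut:]
```

Isolating the boundary-search logic from the windowing logic like this is one way to bring the real function's complexity down.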

Function create_tfrecords has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.

    def create_tfrecords(save_dir, basename, data, bsz_per_host, seq_len,
                         bi_data, sp):
      """Creates TFRecords."""
      data, sent_ids = data[0], data[1]

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 3 hrs to fix.


Function _sample_mask has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.

    def _sample_mask(sp, seg, reverse=False, max_gram=5, goal_num_predict=None):
      """Samples `goal_num_predict` tokens for partial prediction."""
      seg_len = len(seg)
      mask = np.array([False] * seg_len, dtype=bool)

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 3 hrs to fix.


Function get_input_fn has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.

    def get_input_fn(
        tfrecord_dir,
        split,
        bsz_per_host,
        seq_len,

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 2 hrs to fix.


Function get_input_fn has 15 arguments (exceeds 4 allowed). Consider refactoring.

    def get_input_fn(

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 1 hr to fix.
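A common remedy for long parameter lists like this is to bundle settings that travel together into one configuration object. A hedged sketch (the class and field names below are hypothetical, not the module's actual API):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PretrainInputConfig:
    """Groups the data-pipeline knobs that are always passed together."""
    tfrecord_dir: str
    split: str
    bsz_per_host: int
    seq_len: int
    reuse_len: Optional[int] = None
    bi_data: bool = False
    num_predict: Optional[int] = None
    use_bfloat16: bool = False


def get_input_fn_sketch(config: PretrainInputConfig):
    """Instead of 15 positional arguments, accept one config object.
    Here we just return a descriptive key to show the idea."""
    return f"{config.split}/bsz-{config.bsz_per_host}/seq-{config.seq_len}"


# Callers construct the config once and pass it around:
cfg = PretrainInputConfig(tfrecord_dir="/tmp/tfrecords", split="train",
                          bsz_per_host=32, seq_len=512)
```

This also gives each setting a default and a type in one place, which the 15-argument signature cannot do cleanly.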

Function get_dataset has 13 arguments (exceeds 4 allowed). Consider refactoring.

    def get_dataset(params, num_hosts, num_core_per_host, split, file_names,

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 1 hr to fix.

Function format_filename has 10 arguments (exceeds 4 allowed). Consider refactoring.

    def format_filename(prefix, bsz_per_host, seq_len, bi_data, suffix,

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 1 hr to fix.

Function parse_files_to_dataset has 8 arguments (exceeds 4 allowed). Consider refactoring.

    def parse_files_to_dataset(parser, file_names, split, num_batch, num_hosts,

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 1 hr to fix.

Function create_tfrecords has 7 arguments (exceeds 4 allowed). Consider refactoring.

    def create_tfrecords(save_dir, basename, data, bsz_per_host, seq_len,

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 50 mins to fix.

Function _convert_example has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.

    def _convert_example(example, use_bfloat16):
      """Cast int64 into int32 and float32 to bfloat16 if use_bfloat16."""
      for key in list(example.keys()):
        val = example[key]
        if tf_keras.backend.is_sparse(val):

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 45 mins to fix.


Function get_dataset has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.

    def get_dataset(params, num_hosts, num_core_per_host, split, file_names,
                    num_batch, seq_len, reuse_len, perm_size, mask_alpha,
                    mask_beta, use_bfloat16=False, num_predict=None):
      """Gets the dataset."""

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 45 mins to fix.


Function format_filename has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.

    def format_filename(prefix, bsz_per_host, seq_len, bi_data, suffix,
                        mask_alpha=5, mask_beta=1, reuse_len=None, uncased=False,
                        fixed_num_predict=None):
      """docs."""
      if reuse_len is None:

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 45 mins to fix.

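A filename formatter like this accumulates complexity from one conditional branch per setting. Encoding each setting as a short token keeps the branches flat; the following is a simplified, hypothetical sketch (a subset of the parameters, not the module's actual naming scheme):

```python
def format_filename_sketch(prefix, bsz_per_host, seq_len, bi_data,
                           suffix, reuse_len=None, uncased=False):
    """Hypothetical, simplified variant: each setting contributes one
    token, so adding a new setting never deepens the nesting."""
    parts = [
        prefix,
        f"bsz-{bsz_per_host}",
        f"seqlen-{seq_len}",
        "reuse-full" if reuse_len is None else f"reuse-{reuse_len}",
        "bi" if bi_data else "uni",
        "uncased" if uncased else "cased",
    ]
    return ".".join(parts) + "." + suffix
```

Each ternary reads as one linear step, so the cognitive-complexity penalty for nested branching never accrues.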

Function _sample_mask has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def _sample_mask(sp, seg, reverse=False, max_gram=5, goal_num_predict=None):

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 35 mins to fix.

Function _split_a_and_b has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def _split_a_and_b(data, sent_ids, begin_idx, tot_len, extend_target=False):

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 35 mins to fix.

Function _local_perm has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def _local_perm(inputs, targets, is_masked, perm_size, seq_len):

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 35 mins to fix.

Function _sample_mask_ngram has 5 arguments (exceeds 4 allowed). Consider refactoring.

    def _sample_mask_ngram(sp, seg, reverse=False, max_gram=5,

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; about 35 mins to fix.

Identical blocks of code found in 2 locations. Consider refactoring.

        if num_predict is not None:
          indices = tf.range(seq_len, dtype=tf.int64)
          bool_target_mask = tf.cast(target_mask, tf.bool)
          indices = tf.boolean_mask(indices, bool_target_mask)

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py and in official/legacy/xlnet/data_utils.py on lines 497..528; about 3 days to fix.

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

    Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency both to continue to replicate and to diverge, leaving bugs as two similar implementations differ in subtle ways.

Tuning

This issue has a mass of 334.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines. The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication; the lower the threshold, the more fine-grained the comparison. If the engine is too easily reporting duplication, try raising the threshold; if you suspect it isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language. See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Identical blocks of code found in 2 locations. Consider refactoring.

      while goal_num_predict is not None and num_predict < goal_num_predict:
        i = np.random.randint(seg_len)
        if not mask[i]:
          mask[i] = True
          num_predict += 1

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; duplicate at lines 397..401 of the same file; about 3 hrs to fix.

This issue has a mass of 66.
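This duplicated tail loop can be lifted into one shared helper that both `_sample_mask` and `_sample_mask_ngram` call. A sketch with a hypothetical name, using the standard `random` module in place of `np.random` so it stands alone:

```python
import random


def fill_random_mask(mask, goal_num_predict, num_predict):
    """Randomly flip positions in `mask` to True until `goal_num_predict`
    predictions are reached: the shared tail of both sampling functions.
    A `goal_num_predict` of None skips the loop entirely, as in the original."""
    seg_len = len(mask)
    while goal_num_predict is not None and num_predict < goal_num_predict:
        i = random.randrange(seg_len)
        if not mask[i]:
            mask[i] = True
            num_predict += 1
    return mask, num_predict
```

With the helper in place, a future fix (say, guarding against `goal_num_predict > seg_len`, which would loop forever here) only needs to land once instead of twice.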

Identical blocks of code found in 2 locations. Consider refactoring.

      while goal_num_predict is not None and num_predict < goal_num_predict:
        i = np.random.randint(seg_len)
        if not mask[i]:
          mask[i] = True
          num_predict += 1

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; duplicate at lines 464..468 of the same file; about 3 hrs to fix.

This issue has a mass of 66.

Identical blocks of code found in 2 locations. Consider refactoring.

        for filename in cur_record_info["filenames"]:
          basename = os.path.basename(filename)
          new_filename = os.path.join(record_dir, basename)
          new_filenames.append(new_filename)

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py and in official/legacy/xlnet/data_utils.py on lines 648..651; about 1 hr to fix.

This issue has a mass of 48.
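Since this loop is duplicated across two modules, a shared helper (hypothetical name, placed in a common utility module) would remove both copies:

```python
import os


def relocate_filenames(filenames, record_dir):
    """Map each recorded filename onto `record_dir`, keeping only the
    basename: the path-rewriting loop shared by both modules."""
    return [os.path.join(record_dir, os.path.basename(f)) for f in filenames]
```

Both call sites then become a one-line call such as `new_filenames = relocate_filenames(cur_record_info["filenames"], record_dir)`.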

Identical blocks of code found in 2 locations. Consider refactoring.

        while beg < seg_len and not _is_start_piece(sp.IdToPiece(seg[beg].item())):
          beg += 1

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; duplicate at lines 375..376 of the same file; about 1 hr to fix.

This issue has a mass of 46.

Identical blocks of code found in 2 locations. Consider refactoring.

        while beg < seg_len and not _is_start_piece(sp.IdToPiece(seg[beg].item())):
          beg += 1

Severity: Major. Found in official/legacy/xlnet/preprocess_pretrain_data.py; duplicate at lines 438..439 of the same file; about 1 hr to fix.

This issue has a mass of 46.
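This boundary-advancing loop could likewise be factored out. A sketch with a generic predicate standing in for `sp.IdToPiece` plus `_is_start_piece` (the helper name and the `is_start_piece` callable are hypothetical):

```python
def advance_to_start_piece(seg, beg, is_start_piece):
    """Move `beg` forward until it points at a token that starts a word,
    mirroring the duplicated `while beg < seg_len and not ...` loops.
    Returns len(seg) if no start piece remains."""
    seg_len = len(seg)
    while beg < seg_len and not is_start_piece(seg[beg]):
        beg += 1
    return beg
```

Passing the predicate in keeps the helper free of any SentencePiece dependency, so it is trivially unit-testable.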

Similar blocks of code found in 2 locations. Consider refactoring.

        perm_mask_0, target_0, target_mask_0, input_k_0, input_q_0 = _local_perm(
            inputs[:reuse_len],
            target[:reuse_len],
            is_masked[:reuse_len],
            perm_size,

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; similar block at lines 776..781 of the same file; about 55 mins to fix.

This issue has a mass of 37.

Similar blocks of code found in 2 locations. Consider refactoring.

        perm_mask_1, target_1, target_mask_1, input_k_1, input_q_1 = _local_perm(
            inputs[reuse_len:],
            target[reuse_len:],
            is_masked[reuse_len:],
            perm_size,

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py; similar block at lines 769..774 of the same file; about 55 mins to fix.

This issue has a mass of 37.

Identical blocks of code found in 2 locations. Consider refactoring.

      non_func_tokens = tf.logical_not(tf.logical_or(
          tf.equal(inputs, SEP_ID),
          tf.equal(inputs, CLS_ID)))

Severity: Minor. Found in official/legacy/xlnet/preprocess_pretrain_data.py and in official/legacy/xlnet/data_utils.py on lines 770..771; about 30 mins to fix.

This issue has a mass of 32.
