tensorflow/models

View on GitHub

Showing 11,634 of 11,634 total issues

Avoid deeply nested control flow statements.
Open

          if span_contains_answer:
            answer_text = " ".join(tokens[start_position:(end_position + 1)])
            logging.info("start_position: %d", (start_position))
            logging.info("end_position: %d", (end_position))
            logging.info("answer: %s", tokenization.printable_text(answer_text))
Severity: Major
Found in official/nlp/data/squad_lib.py - About 45 mins to fix
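
A generic way to clear findings like this is to move the innermost branch into a small helper that returns early, so the enclosing loop keeps a single level of indentation. A minimal sketch only, using a hypothetical helper name rather than the actual squad_lib.py fix:

from absl import logging  # assumed, matching the logging calls in the snippet above

def _log_answer_span(tokens, start_position, end_position, span_contains_answer):
  """Logs the recovered answer text when the span actually contains one."""
  if not span_contains_answer:
    return  # early return removes one level of nesting at the call site
  answer_text = " ".join(tokens[start_position:end_position + 1])
  logging.info("start_position: %d", start_position)
  logging.info("end_position: %d", end_position)
  logging.info("answer: %s", answer_text)

The call site then becomes a single unnested call, which is what resolves the "deeply nested control flow" warning.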

Avoid deeply nested control flow statements.
Open

          if score_diff > null_score_diff_threshold:
            all_predictions[example.qas_id] = ""
          else:
            all_predictions[example.qas_id] = best_non_null_entry.text
      else:
Severity: Major
Found in official/nlp/data/squad_lib.py - About 45 mins to fix

Function _generate_detections has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def _generate_detections(boxes,
Severity: Minor
Found in official/legacy/detection/ops/postprocess_ops.py - About 45 mins to fix
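
A common refactor for "too many arguments" findings is to group related parameters into one configuration object so the signature stays small. A hedged sketch with hypothetical names; this is not the actual _generate_detections signature in postprocess_ops.py:

import dataclasses

@dataclasses.dataclass(frozen=True)
class DetectionPostprocessConfig:
  """Bundles tuning knobs that would otherwise be separate positional arguments."""
  max_detections: int = 100
  score_threshold: float = 0.05

def generate_detections(boxes, scores, config: DetectionPostprocessConfig):
  """Keeps detections whose score clears the threshold (illustrative logic only)."""
  keep = [i for i, score in enumerate(scores) if score >= config.score_threshold]
  keep = keep[:config.max_detections]
  return [boxes[i] for i in keep], [scores[i] for i in keep]

Callers build one config object and pass it through, so adding a new knob later does not widen every call site past the lint limit.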

Avoid deeply nested control flow statements.
Open

          if not is_impossible:
            answer = qa["answers"][0]
            orig_answer_text = answer["text"]
            start_position = answer["answer_start"]
          else:
Severity: Major
Found in official/nlp/data/squad_lib_sp.py - About 45 mins to fix

Function generate_sentence_retrevial_tf_record has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def generate_sentence_retrevial_tf_record(processor,
Severity: Minor
Found in official/nlp/data/sentence_retrieval_lib.py - About 45 mins to fix

Function truncate_seq_pair has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng):
  """Truncates a pair of sequences to a maximum sequence length."""
  while True:
    total_length = len(tokens_a) + len(tokens_b)
    if total_length <= max_num_tokens:
Severity: Minor
Found in official/nlp/data/create_pretraining_data.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

Further reading
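
The nesting rule above is the one behind most of the cognitive-complexity findings in this report: the same branches cost more when they are stacked inside each other. A generic illustration, not taken from any of the files listed here, of how extracting a helper keeps each unit's flow linear:

def contains_negative_nested(rows):
  """Nested form: the `if` sits inside two loops, so every flow break also pays a nesting penalty."""
  found = False
  for row in rows:
    for value in row:
      if value < 0:
        found = True
  return found

def _row_has_negative(row):
  """Helper with a single, linear flow."""
  for value in row:
    if value < 0:
      return True
  return False

def contains_negative_flat(rows):
  """Each function now reads top to bottom with at most one level of branching."""
  return any(_row_has_negative(row) for row in rows)

Both versions return the same result; the second simply spreads the flow breaks across small units instead of nesting them, which is what keeps each unit under the complexity threshold.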

Avoid deeply nested control flow statements.
Open

          if use_eod:
            token_ids = [eod_symbol]
            sentence_id = not sentence_id
          else:
            continue
Severity: Major
Found in official/nlp/data/create_xlnet_pretraining_data.py - About 45 mins to fix

Function _create_fake_bert_dataset has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

def _create_fake_bert_dataset(
Severity: Minor
Found in official/nlp/data/pretrain_dataloader_test.py - About 45 mins to fix

Function _online_sample_mask has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

  def _online_sample_mask(self, inputs: tf.Tensor,
                          boundary: tf.Tensor) -> tf.Tensor:
    """Samples target positions for predictions.

    Descriptions of each strategy:
Severity: Minor
Found in official/nlp/data/pretrain_dataloader.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

          if prev_is_whitespace:
            doc_tokens.append(c)
          else:
            doc_tokens[-1] += c
          prev_is_whitespace = False
Severity: Major
Found in official/nlp/data/squad_lib.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

          for _ in range(10):
            random_document_index = rng.randint(0, len(all_documents) - 1)
            if random_document_index != document_index:
              break


Severity: Major
Found in official/nlp/data/create_pretraining_data.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

          if (len(qa["answers"]) != 1) and (not is_impossible):
            raise ValueError(
                "For training, each question should have exactly 1 answer.")
          if not is_impossible:
Severity: Major
Found in official/nlp/data/squad_lib_sp.py - About 45 mins to fix

Function _decode has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

  def _decode(self, record: tf.Tensor):
    """Decodes a serialized tf.Example."""
    name_to_features = {
        'input_ids': tf.io.FixedLenFeature([self._seq_length], tf.int64),
        'input_mask': tf.io.FixedLenFeature([self._seq_length], tf.int64),
Severity: Minor
Found in official/nlp/data/question_answering_dataloader.py - About 45 mins to fix

Function __call__ has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __call__(self, box_outputs, class_outputs, anchor_boxes, image_shape,
Severity: Minor
Found in official/legacy/detection/ops/postprocess_ops.py - About 45 mins to fix

Function _check_is_max_context has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def _check_is_max_context(doc_spans, cur_span_index, position):
  """Check if this is the 'max context' doc span for the token."""

  # Because of the sliding window approach taken to scoring documents, a single
  # token can appear in multiple documents. E.g.
Severity: Minor
Found in official/nlp/data/squad_lib_sp.py - About 45 mins to fix

Function __init__ has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

  def __init__(self,
Severity: Minor
Found in official/nlp/data/classifier_data_lib.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

          if not is_impossible:
            answer = qa["answers"][0]
            orig_answer_text = answer["text"]
            answer_offset = answer["answer_start"]
            answer_length = len(orig_answer_text)
Severity: Major
Found in official/nlp/data/squad_lib.py - About 45 mins to fix

Avoid deeply nested control flow statements.
Open

          if is_box_lrtb:  # Box in left-right-top-bottom format.
            this_level_boxes = box_utils.decode_boxes_lrtb(
                this_level_boxes, this_level_anchors)
          else:  # Box in standard x-y-h-w format.
            this_level_boxes = box_utils.decode_boxes(
Severity: Major
Found in official/legacy/detection/ops/roi_ops.py - About 45 mins to fix

Function _check_is_max_context has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

def _check_is_max_context(doc_spans, cur_span_index, position):
  """Check if this is the 'max context' doc span for the token."""

  # Because of the sliding window approach taken to scoring documents, a single
  # token can appear in multiple documents. E.g.
Severity: Minor
Found in official/nlp/data/squad_lib.py - About 45 mins to fix

Function _maybe_truncate has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

  def _maybe_truncate(self, inputs):
    truncated_inputs = {}
    for k, v in inputs.items():
      if k == 'inputs' or k == 'targets':
        truncated_inputs[k] = tf.pad(
Severity: Minor
Found in official/nlp/data/wmt_dataloader.py - About 45 mins to fix
