fedspendingtransparency/usaspending-api

Showing 156 of 218 total issues

File download_column_historical_lookups.py has 3065 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Sets up mappings from column names used in downloads to the query paths used to get the data from django.

Not in use while we pull CSV data from the non-historical tables. Until we switch to pulling CSV downloads from the
historical tables TransactionFPDS and TransactionFABS, import download_column_lookups.py instead.
Severity: Major
Found in usaspending_api/download/v2/download_column_historical_lookups.py - About 1 wk to fix

Function matview_search_filter has a Cognitive Complexity of 176 (exceeds 15 allowed). Consider refactoring.
Open

def matview_search_filter(filters, model, for_downloads=False):
    queryset = model.objects.all()

    recipient_scope_q = Q(recipient_location_country_code="USA") | Q(recipient_location_country_name="UNITED STATES")
    pop_scope_q = Q(pop_country_code="USA") | Q(pop_country_name="UNITED STATES")
Severity: Minor
Found in usaspending_api/awards/v2/filters/search.py - About 3 days to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

Further reading
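
To make the nesting rule concrete, here is a small hypothetical illustration (not code from this repository): the same three membership checks score noticeably higher when nested inside one another than when written as a flat sequence, because each level of nesting adds an extra increment on top of the structure's own cost.

```python
# Hypothetical sketch of how nesting inflates Cognitive Complexity.
# Each flow-breaking structure costs +1; nesting adds a further +1 per level.

def flat_lookup(filters):
    # Three independent ifs at the top level: roughly +1 each, total ~3.
    result = []
    if "agency" in filters:
        result.append(("agency", filters["agency"]))
    if "year" in filters:
        result.append(("year", filters["year"]))
    if "state" in filters:
        result.append(("state", filters["state"]))
    return result

def nested_lookup(filters):
    # The same checks nested (and therefore gated on one another):
    # +1, +2, +3 under the nesting rule, total ~6 for identical logic depth.
    result = []
    if "agency" in filters:
        result.append(("agency", filters["agency"]))
        if "year" in filters:
            result.append(("year", filters["year"]))
            if "state" in filters:
                result.append(("state", filters["state"]))
    return result
```

This is why functions like matview_search_filter, which dispatch on many filter keys in one body, accumulate such large scores; splitting each filter into its own small handler keeps every branch at the top level.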

File load_transactions_in_delta.py has 1270 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import copy
import logging
import re

from contextlib import contextmanager
Severity: Major
Found in usaspending_api/etl/management/commands/load_transactions_in_delta.py - About 3 days to fix

File conftest_spark.py has 1194 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import json
import logging
import uuid
from datetime import datetime
from pathlib import Path
Severity: Major
Found in usaspending_api/tests/conftest_spark.py - About 3 days to fix

Function subaward_filter has a Cognitive Complexity of 132 (exceeds 15 allowed). Consider refactoring.
Open

def subaward_filter(filters, for_downloads=False):
    queryset = SubawardSearch.objects.all()

    recipient_scope_q = Q(sub_legal_entity_country_code="USA") | Q(sub_legal_entity_country_name="UNITED STATES")
    pop_scope_q = Q(sub_place_of_perform_country_co="USA") | Q(sub_place_of_perform_country_name="UNITED STATES")
Severity: Minor
Found in usaspending_api/awards/v2/filters/sub_award.py - About 2 days to fix

File download_annotation_functions.py has 863 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import datetime
from typing import List, Optional

from django.db.models.functions import Cast, Coalesce
from django.db.models import (
Severity: Major
Found in usaspending_api/download/helpers/download_annotation_functions.py - About 2 days to fix

Function post has a Cognitive Complexity of 110 (exceeds 15 allowed). Consider refactoring.
Open

    def post(self, request, pk, format=None):
        # create response
        response = {"results": {}}

        # get federal account id from url
Severity: Minor
Found in usaspending_api/accounts/views/federal_accounts_v2.py - About 2 days to fix

File sqs_work_dispatcher.py has 825 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import logging
import inspect
import json
import os


Severity: Major
Found in usaspending_api/common/sqs/sqs_work_dispatcher.py - About 1 day to fix

Cyclomatic complexity is too high in function get_business_categories_fpds. (104)
Open

def get_business_categories_fpds(row):
    # This function is supposed to be invoked as a Spark UDF with a named_struct containing the necessary
    # fields passed to it.

    def row_get(row, col_name):

Cyclomatic Complexity

Cyclomatic Complexity corresponds to the number of decisions a block of code contains plus 1. This number (also called the McCabe number) is equal to the number of linearly independent paths through the code. This number can be used as a guide when testing conditional logic in blocks.

Radon analyzes the AST of a Python program to compute Cyclomatic Complexity. Statements have the following effects on Cyclomatic Complexity:

• if (+1): An if statement is a single decision.
• elif (+1): The elif statement adds another decision.
• else (+0): The else statement does not cause a new decision; the decision is at the if.
• for (+1): There is a decision at the start of the loop.
• while (+1): There is a decision at the while statement.
• except (+1): Each except branch adds a new conditional path of execution.
• finally (+0): The finally block is unconditionally executed.
• with (+1): The with statement roughly corresponds to a try/except block (see PEP 343 for details).
• assert (+1): The assert statement internally roughly equals a conditional statement.
• comprehension (+1): A list/set/dict comprehension or generator expression is equivalent to a for loop.
• boolean operator (+1): Every boolean operator (and, or) adds a decision point.

Source: http://radon.readthedocs.org/en/latest/intro.html
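
Walking the rules above through a small hypothetical function (not from this repository) shows how the count accumulates:

```python
# Hand-counting Cyclomatic Complexity using the per-construct rules above.
# The baseline is 1; each decision construct adds the listed amount.

def classify_award(amount, award_type, flags):
    if amount < 0:                              # if   -> +1
        return "invalid"
    elif award_type == "grant":                 # elif -> +1
        label = "grant"
    else:                                       # else -> +0
        label = "contract"
    for flag in flags:                          # for  -> +1
        if flag == "urgent" or flag == "late":  # if -> +1, or -> +1
            label += "!"
    return label                                # total CC = 1 + 5 = 6
```

At 6 this function is still easy to test exhaustively; a score of 104, as reported for get_business_categories_fpds, implies over a hundred independent paths, which is why the tool flags it for decomposition.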

Identical blocks of code found in 2 locations. Consider refactoring.
Open

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self


usaspending_api/database_scripts/job_archive/backfill_solicitation_date.py on lines 75..96

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 199.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
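
For the two identical Timer classes flagged here, the usual fix is to hoist the block into one shared helper that both job-archive scripts import. A minimal sketch follows; the module name and the exact attributes beyond what the excerpt shows are assumptions, not the project's actual layout:

```python
# shared_timer.py - hypothetical shared home for the duplicated Timer, so that
# backfill_solicitation_date.py and backfill_per_transaction_exec_comp.py can
# both `from shared_timer import Timer` instead of each redefining it.
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Record elapsed wall-clock seconds when the block exits.
        self.elapsed = time.perf_counter() - self.start
        return False  # never swallow exceptions raised inside the block
```

Usage stays the same at both call sites (`with Timer() as t: ...`), and any future change to the timing logic happens in exactly one place.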

Identical blocks of code found in 2 locations. Consider refactoring.
Open

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self


usaspending_api/database_scripts/job_archive/backfill_per_transaction_exec_comp.py on lines 179..200

Function validate_post_request has a Cognitive Complexity of 96 (exceeds 15 allowed). Consider refactoring.
Open

    def validate_post_request(self, request):
        if "filters" in request:
            for filt in request["filters"]:
                if "combine_method" in filt:
                    try:
Severity: Minor
Found in usaspending_api/common/api_request_utils.py - About 1 day to fix

File request_validations.py has 716 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import json
from copy import deepcopy
from datetime import datetime, MINYEAR, MAXYEAR
from django.conf import settings
from typing import Optional
Severity: Major
Found in usaspending_api/download/v2/request_validations.py - About 1 day to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

def run_spending_update_query(transaction_sql, transaction_type, broker_data):
    with spending_connection.cursor() as update_cursor:
        update_query = build_spending_update_query(transaction_sql, broker_data)
        with Timer() as t:
            update_cursor.execute(update_query, [col for row in broker_data for col in row])
usaspending_api/database_scripts/job_archive/backfill_per_transaction_exec_comp.py on lines 224..237

Identical blocks of code found in 2 locations. Consider refactoring.
Open

def run_spending_update_query(transaction_sql, transaction_type, broker_data):
    with spending_connection.cursor() as update_cursor:
        update_query = build_spending_update_query(transaction_sql, broker_data)
        with Timer() as t:
            update_cursor.execute(update_query, [col for row in broker_data for col in row])
usaspending_api/database_scripts/job_archive/backfill_solicitation_date.py on lines 120..133

Cyclomatic complexity is too high in function matview_search_filter. (84)
Open

def matview_search_filter(filters, model, for_downloads=False):
    queryset = model.objects.all()

    recipient_scope_q = Q(recipient_location_country_code="USA") | Q(recipient_location_country_name="UNITED STATES")
    pop_scope_q = Q(pop_country_code="USA") | Q(pop_country_name="UNITED STATES")

File swap_in_new_table.py has 672 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import json
import logging
import re

from datetime import datetime
Severity: Major
Found in usaspending_api/etl/management/commands/swap_in_new_table.py - About 1 day to fix

Function get_business_categories_fpds has a Cognitive Complexity of 83 (exceeds 15 allowed). Consider refactoring.
Open

def get_business_categories_fpds(row):
    # This function is supposed to be invoked as a Spark UDF with a named_struct containing the necessary
    # fields passed to it.

    def row_get(row, col_name):
Severity: Minor
Found in usaspending_api/broker/helpers/get_business_categories.py - About 1 day to fix

File spark.py has 639 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""
Spark utility functions that could be used as stages or steps of an ETL job (aka "data pipeline")

NOTE: This is distinguished from the usaspending_api.common.helpers.spark_helpers module, which holds mostly boilerplate
functions for setup and configuration of the spark environment
Severity: Major
Found in usaspending_api/common/etl/spark.py - About 1 day to fix

Function build_elasticsearch_result has a Cognitive Complexity of 81 (exceeds 15 allowed). Consider refactoring.
Open

    def build_elasticsearch_result(self, response: dict) -> Dict[str, dict]:
        results = {}
        geo_info_buckets = response.get("group_by_agg_key", {}).get("buckets", [])

        for bucket in geo_info_buckets:
Severity: Minor
Found in usaspending_api/disaster/v2/views/spending_by_geography.py - About 1 day to fix
