resource-watch/aqueduct-analysis-microservice


Showing 49 of 49 total issues

File food_supply_chain_service.py has 693 lines of code (exceeds 250 allowed). Consider refactoring.
Open

#!/usr/local/bin/python
"""
Name: AqFood_Supply_Chain_Analyzer.py
Project: Aqueduct Food Tool Enhancements
Description:
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py - About 1 day to fix

File cba_service.py has 675 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import datetime
import logging
import os
import sys, traceback

Severity: Major
Found in aqueduct/services/cba_service.py - About 1 day to fix

File risk_service.py has 497 lines of code (exceeds 250 allowed). Consider refactoring.
Open

import logging
import os

import numpy as np
import pandas as pd
Severity: Minor
Found in aqueduct/services/risk_service.py - About 7 hrs to fix

File ps_router.py has 459 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""API ROUTER"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
Severity: Minor
Found in aqueduct/routes/api/v1/ps_router.py - About 7 hrs to fix

File validators.py has 290 lines of code (exceeds 250 allowed). Consider refactoring.
Open

"""VALIDATORS"""
import json
import logging
from functools import wraps

Severity: Minor
Found in aqueduct/validators.py - About 2 hrs to fix

Function find_locations has 68 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def find_locations(self, water_unit):
        """
        :param water_unit: Geometry ID associated with the selected indicator. AQID for aquifers or PFAF_ID for watersheds
        :return: dataframe with that matches each supply location to its watersheds and crops (close to final product); dataframe with all errors logged
        """
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        ad0_ids = df_adm[["row", "GID_0"]][
            (df_adm["Select_By"] == "country") & (~df_adm["Country_clean"].isna())
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py and 1 other location - About 2 hrs to fix
aqueduct/services/food_supply_chain_service.py on lines 786..787

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency both to keep replicating and to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 58.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too readily, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.


RiskService has 24 functions (exceeds 20 allowed). Consider refactoring.
Open

class RiskService(object):
    def __init__(self, user_selections):
        # DB Connection
        self.engine = sqlalchemy.create_engine(os.getenv('POSTGRES_URL'))
        self.metadata = sqlalchemy.MetaData(bind=self.engine)
Severity: Minor
Found in aqueduct/services/risk_service.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        ad1_ids = df_ad1m[["row", "GID_1"]][
            (df_ad1m["Select_By"] == "state") & (~df_ad1m["State_clean"].isna())
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py and 1 other location - About 2 hrs to fix
aqueduct/services/food_supply_chain_service.py on lines 739..740
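The two similar blocks above differ only in the administrative level being selected (country/GID_0 versus state/GID_1), so one way to apply DRY here is a small parameterized helper that both call sites share. The sketch below is illustrative only: the helper name is invented, and the flagged excerpts are truncated, so the real selections may chain further operations.

import pandas as pd

def select_admin_ids(df: pd.DataFrame, level: str, gid_col: str, clean_col: str) -> pd.DataFrame:
    # Return the row/GID pairs for locations selected at the given admin level,
    # mirroring the boolean mask used in both flagged blocks.
    mask = (df["Select_By"] == level) & (~df[clean_col].isna())
    return df.loc[mask, ["row", gid_col]]

# Usage corresponding to the two duplicated blocks:
# ad0_ids = select_admin_ids(df_adm, "country", "GID_0", "Country_clean")
# ad1_ids = select_admin_ids(df_ad1m, "state", "GID_1", "State_clean")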


Function calc_risk has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def calc_risk(self):
        """
        Purpose: Runs analysis on the fly instead of using precalcuted results
        (For when users define current protection level, find annual impact themselves)
        Output:
Severity: Minor
Found in aqueduct/services/risk_service.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

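As a concrete illustration of these rules (a hypothetical example, not code from this repository): both functions below do the same work, but the second keeps every branch at a single level of nesting, which is the kind of restructuring that lowers a Cognitive Complexity score.

def classify_nested(records):
    labels = []
    for rec in records:                      # break in the linear flow
        if rec is not None:                  # nested inside the loop
            if rec.get("value", 0) > 0:      # nested one level deeper
                labels.append("positive")
            else:
                labels.append("non-positive")
    return labels

def classify_flat(records):
    # Same behaviour: skipping bad records early keeps every branch shallow,
    # so fewer flow-breaking structures are nested.
    labels = []
    for rec in records:
        if rec is None:
            continue
        labels.append("positive" if rec.get("value", 0) > 0 else "non-positive")
    return labels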

Function average_prot has a Cognitive Complexity of 14 (exceeds 5 allowed). Consider refactoring.
Open

    def average_prot(self, m, year, risk_data_input):
        #logging.debug('[CBA, average_prot]: start')
        idx = int(year) - self.implementation_start
        #logging.debug(f'[CBA, average_prot, idx]: {idx} ==> {year} {self.implementation_start}')
        clm = "histor" if year == '2010' else self.clim
Severity: Minor
Found in aqueduct/services/cba_service.py - About 1 hr to fix


Function __init__ has 43 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def __init__(self, user_selections):
        ### DBConexion
        self.engine = sqlalchemy.create_engine(os.getenv('POSTGRES_URL'))
        self.metadata = sqlalchemy.MetaData(bind=self.engine)
        self.metadata.reflect(self.engine)
Severity: Minor
Found in aqueduct/services/cba_service.py - About 1 hr to fix

Identical blocks of code found in 3 locations. Consider refactoring.
Open

    s3 = session.client(
         service_name='s3',
         aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
         aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
         endpoint_url=os.environ.get("ENDPOINT_URL")
Severity: Major
Found in aqueduct/workers/supply-chain-worker.py and 2 other locations - About 1 hr to fix
aqueduct/services/food_supply_chain_service.py on lines 524..529
aqueduct/services/food_supply_chain_service.py on lines 969..973

This issue has a mass of 46.

Identical blocks of code found in 3 locations. Consider refactoring.
Open

        s3 = session.client(
            service_name="s3",
            aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
            aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
            endpoint_url=os.environ.get("ENDPOINT_URL"),
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py and 2 other locations - About 1 hr to fix
aqueduct/services/food_supply_chain_service.py on lines 524..529
aqueduct/workers/supply-chain-worker.py on lines 13..17


Identical blocks of code found in 3 locations. Consider refactoring.
Open

        if os.environ.get("ENDPOINT_URL"):
            s3 = session.client(
                service_name="s3",
                aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
                aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY"),
Severity: Major
Found in aqueduct/services/food_supply_chain_service.py and 2 other locations - About 1 hr to fix
aqueduct/services/food_supply_chain_service.py on lines 969..973
aqueduct/workers/supply-chain-worker.py on lines 13..17
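All three flagged locations build the same boto3 S3 client from environment variables, so a shared factory that both food_supply_chain_service.py and supply-chain-worker.py import would remove the duplication. A sketch under assumptions: the helper name and module placement are invented, and it folds in the ENDPOINT_URL guard used by one of the call sites.

import os

import boto3

def build_s3_client(session=None):
    # Create the S3 client used by the supply-chain service and worker.
    session = session or boto3.session.Session()
    kwargs = {
        "service_name": "s3",
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
    }
    # Only pass endpoint_url when one is configured, matching the guarded
    # variant flagged above; the other call sites pass it unconditionally.
    if os.environ.get("ENDPOINT_URL"):
        kwargs["endpoint_url"] = os.environ.get("ENDPOINT_URL")
    return session.client(**kwargs)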


Function prepare_payload has a Cognitive Complexity of 13 (exceeds 5 allowed). Consider refactoring.
Open

    def prepare_payload(self, payload):
        new_payload = {}

        for key in payload:
            new_key = self.output_lookup.get(key)
Severity: Minor
Found in aqueduct/services/food_supply_chain_service.py - About 1 hr to fix
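Only the first lines of prepare_payload are visible, but the excerpt is a key-remapping loop over output_lookup; if most of the branches that drive the score to 13 are per-key conditionals, a comprehension can flatten them. This is speculative, and it assumes keys missing from output_lookup should keep their original name, which the excerpt does not confirm.

    def prepare_payload(self, payload):
        # Speculative reshaping of the visible loop: rename keys via output_lookup,
        # falling back to the original key when no mapping exists. The real method
        # likely does more than the excerpt shows.
        return {
            self.output_lookup.get(key, key): value
            for key, value in payload.items()
        }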


Function analyze has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
Open

def analyze(**kwargs):
    """Analyze water risk data
    ---
    get:
        summary: Allow  water risk atlas analysis. Pasing this params as 'application/json' on a Post
Severity: Minor
Found in aqueduct/routes/api/v1/ps_router.py - About 1 hr to fix


Function get_table has 11 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def get_table(points, analysis_type, wscheme, month, year, change_type, indicator, scenario, 
Severity: Major
Found in aqueduct/services/carto_service.py - About 1 hr to fix
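A common refactoring for a signature this wide is a parameter object that bundles the user's selections. The dataclass below is an illustration only: it lists just the eight parameters visible in the truncated excerpt (the remaining three are cut off in the preview), and the field types are guesses rather than the service's actual interface.

from dataclasses import dataclass
from typing import Any

@dataclass
class TableRequest:
    # Only the parameters visible in the truncated signature are listed here.
    points: Any
    analysis_type: str
    wscheme: str
    month: Any
    year: Any
    change_type: str
    indicator: str
    scenario: str

# The call site then passes a single object instead of a long positional list:
# get_table(TableRequest(points=..., analysis_type=..., wscheme=..., ...), ...)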

Function get_supply_chain_analysis has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def get_supply_chain_analysis(user_indicator, threshold, **kwargs):
    try:
        # check if the post request has the file part
        if "data" not in request.files:
            logging.error("[ROUTER]: No input file provided")
Severity: Minor
Found in aqueduct/routes/api/v1/ps_router.py - About 1 hr to fix


Function analyze has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def analyze(self):
        # allStartTime = time.time()
        ##--------------------------------------------------------
        ###              ANALYSIS          ###
        ##--------------------------------------------------------
Severity: Minor
Found in aqueduct/services/cba_service.py - About 1 hr to fix

