OCA/openupgradelib

Showing 163 of 163 total issues

Similar blocks of code found in 2 locations. Consider refactoring.

def replace_html_replacement_class_rp_by_inline_shortcut(
    class_rp_by_inline="", **kwargs
):
    """Shortcut to replace an attribute spec.

Severity: Major
Found in openupgradelib/openupgrade_tools.py and 1 other location - About 4 hrs to fix
openupgradelib/openupgrade_tools.py on lines 331..352

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
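
As a rough illustration of how a pair of near-duplicate shortcuts like the two flagged above can be folded into one helper, the sketch below keeps only the part that differs as a parameter. All names in it (_build_replacement_spec and the two wrapper functions) are made up for the example and are not the actual openupgradelib API.

# Illustrative sketch only, not openupgradelib's real code: the shared keyword
# handling lives in one place, and each shortcut just names the key it fills in.


def _build_replacement_spec(key, value, **kwargs):
    """Collect the keyword arguments shared by every replacement shortcut."""
    spec = dict(kwargs)
    spec[key] = value
    return spec


def replace_attr_shortcut(attr_rp="", **kwargs):
    """Shortcut for an attribute spec, e.g. {'data-toggle': 'data-bs-toggle'}."""
    return _build_replacement_spec("attr_rp", attr_rp, **kwargs)


def replace_class_by_inline_shortcut(class_rp_by_inline="", **kwargs):
    """Shortcut for replacing a class spec by an inline style."""
    return _build_replacement_spec("class_rp_by_inline", class_rp_by_inline, **kwargs)


# Example: replace_attr_shortcut({"data-toggle": "data-bs-toggle"}, selector="//nav")
# returns {'selector': '//nav', 'attr_rp': {'data-toggle': 'data-bs-toggle'}}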

Tuning

This issue has a mass of 79.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Similar blocks of code found in 2 locations. Consider refactoring.

def replace_html_replacement_attr_shortcut(attr_rp="", **kwargs):
    """Shortcut to replace an attribute spec.

    :param dict attr_rp:
        EX: {'data-toggle': 'data-bs-toggle'}
Severity: Major
Found in openupgradelib/openupgrade_tools.py and 1 other location - About 4 hrs to fix
openupgradelib/openupgrade_tools.py on lines 305..328

This issue has a mass of 79.

Function stemWord has 109 lines of code (exceeds 25 allowed). Consider refactoring.

  this.stemWord = function (w) {
    var stem;
    var suffix;
    var firstch;
    var origword = w;
Severity: Major
Found in docs/_static/language_data.js - About 4 hrs to fix

Function merge_records has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.

def merge_records(
    env,
    model_name,
    record_ids,
    target_record_id,
Severity: Minor
Found in openupgradelib/openupgrade_merge_records.py - About 4 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules, illustrated in the sketch after this list:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"
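
A hand-made sketch of how those rules play out (not Code Climate's exact scoring): every break in linear flow counts, nesting counts extra, so flattening nested branches with an early exit usually lowers the score.

# Hand-made illustration, not Code Climate's exact algorithm: each break in
# linear flow adds to the score, nesting adds more, so the flat variant with
# an early exit scores lower even though it computes the same result.


def sum_positive_nested(values):
    total = 0
    for value in values:          # break in linear flow
        if value is not None:     # break in linear flow, nested once
            if value > 0:         # break in linear flow, nested twice
                total += value
    return total


def sum_positive_flat(values):
    total = 0
    for value in values:                    # break in linear flow
        if value is None or value <= 0:     # single nested branch, early exit
            continue
        total += value
    return total


# Both return 8 for [3, None, -1, 5]; the flat version reads top to bottom
# without the reader having to track nested conditions.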

Function move_field_m2o has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.

def move_field_m2o(
    cr,
    pool,
    registry_old_model,
    field_old_model,
Severity: Minor
Found in openupgradelib/openupgrade.py - About 3 hrs to fix

Function _get_existing_records has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.

def _get_existing_records(cr, fp, module_name):
    """yield file like objects per 'leaf' node in the xml file that exists.
    This is for not trying to create a record with partial data in case the
    record was removed in the database."""
Severity: Minor
Found in openupgradelib/openupgrade.py - About 3 hrs to fix

Function migrate has a Cognitive Complexity of 26 (exceeds 5 allowed). Consider refactoring.

def migrate(no_version=False, use_env=None, uid=None, context=None):
    """
    This is the decorator for the migrate() function
    in migration scripts.
Severity: Minor
Found in openupgradelib/openupgrade.py - About 3 hrs to fix

Function _adjust_merged_values_orm has a Cognitive Complexity of 25 (exceeds 5 allowed). Consider refactoring.

def _adjust_merged_values_orm(
    env, model_name, record_ids, target_record_id, field_spec
):
    """This method deals with the values on the records to be merged +
    the target record, performing operations that make sense on the meaning
Severity: Minor
Found in openupgradelib/openupgrade_merge_records.py - About 3 hrs to fix

Consider simplifying this complex logical expression.

    if field_type in ("char", "text", "html"):
        if not operation:
            operation = "other" if field_type == "char" else "merge"
        if operation == "first_not_null":
            field_vals = [x for x in field_vals if x]
Severity: Critical
Found in openupgradelib/openupgrade_merge_records.py - About 3 hrs to fix
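
One generic way to tame an expression like this is to dispatch on the operation instead of chaining nested conditionals. The sketch below only illustrates that pattern; the handler names, defaults, and return values are invented and do not reproduce openupgradelib's actual merge behaviour.

# Pattern sketch only (made-up names, not openupgradelib's merge logic):
# per-operation handlers looked up from a dict keep each branch small instead
# of one long chain of nested conditionals.


def _merge_text(values):
    """Concatenate all non-empty values."""
    return " ".join(v for v in values if v)


def _first_not_null(values):
    """Return the first non-empty value, or False if there is none."""
    return next((v for v in values if v), False)


TEXT_HANDLERS = {
    "merge": _merge_text,
    "first_not_null": _first_not_null,
}


def combine_text_values(field_type, operation, values):
    if field_type not in ("char", "text", "html"):
        raise ValueError("unsupported field type: %s" % field_type)
    if not operation:
        # invented default: pick a handler per field type
        operation = "first_not_null" if field_type == "char" else "merge"
    handler = TEXT_HANDLERS.get(operation, _first_not_null)
    return handler(values)


# combine_text_values("char", None, [False, "a", "b"]) -> "a"
# combine_text_values("text", "merge", ["x", "", "y"]) -> "x y"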

Function load_data has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.

def load_data(env_or_cr, module_name, filename, idref=None, mode="init"):
    """
    Load an xml, csv or yml data file from your post script. The usual case for
    this is the
    occurrence of newly added essential or useful data in the module that is
Severity: Minor
Found in openupgradelib/openupgrade.py - About 3 hrs to fix

Function convert_binary_field_to_attachment has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.

def convert_binary_field_to_attachment(env, field_spec):
    """This method converts the 8.0 binary fields to attachments like Odoo 9.0
    makes with the new attachment=True attribute. It has to be called on
    post-migration script, as there's a call to get the res_name of the
    target model, which is not yet loaded on pre-migration.
Severity: Minor
Found in openupgradelib/openupgrade_90.py - About 3 hrs to fix

File openupgrade_tools.py has 300 lines of code (exceeds 250 allowed). Consider refactoring.

# -*- coding: utf-8 -*- # pylint: disable=C8202
##############################################################################
#
#    OpenERP, Open Source Management Solution
#    This module copyright (C) 2012-2014 Therp BV (<http://therp.nl>)
Severity: Minor
Found in openupgradelib/openupgrade_tools.py - About 3 hrs to fix

Function delete_record_translations has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.

def delete_record_translations(cr, module, xml_ids, field_list=None):
    """Cleanup translations of specific records in a module.

    :param module: module name
    :param xml_ids: a tuple or list of xml record IDs
Severity: Minor
Found in openupgradelib/openupgrade.py - About 3 hrs to fix

Function performTermsSearch has 76 lines of code (exceeds 25 allowed). Consider refactoring.

  performTermsSearch: (searchTerms, excludedTerms) => {
    // prepare search
    const terms = Search._index.terms;
    const titleTerms = Search._index.titleterms;
    const filenames = Search._index.filenames;
Severity: Major
Found in docs/_static/searchtools.js - About 3 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.

    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = new RegExp(mgr0);
Severity: Major
Found in docs/_static/language_data.js and 1 other location - About 2 hrs to fix
docs/_static/language_data.js on lines 130..137

This issue has a mass of 92.

Similar blocks of code found in 2 locations. Consider refactoring.

    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = new RegExp(mgr0);
Severity: Major
Found in docs/_static/language_data.js and 1 other location - About 2 hrs to fix
docs/_static/language_data.js on lines 141..148

This issue has a mass of 92.

Function _performSearch has 68 lines of code (exceeds 25 allowed). Consider refactoring.

  _performSearch: (query, searchTerms, excludedTerms, highlightTerms, objectTerms) => {
    const filenames = Search._index.filenames;
    const docNames = Search._index.docnames;
    const titles = Search._index.titles;
    const allTitles = Search._index.alltitles;
Severity: Major
Found in docs/_static/searchtools.js - About 2 hrs to fix

Function rename_xmlids has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.

def rename_xmlids(cr, xmlids_spec, allow_merge=False):
    """
    Rename XML IDs. Typically called in the pre script.
    One usage example is when an ID changes module. In OpenERP 6 for example,
    a number of res_groups IDs moved to module base from other modules (
Severity: Minor
Found in openupgradelib/openupgrade.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.

    if (!terms.hasOwnProperty(word)) {
      Object.keys(terms).forEach((term) => {
        if (term.match(escapedWord))
          arr.push({ files: terms[term], score: Scorer.partialTerm });
      });
Severity: Major
Found in docs/_static/searchtools.js and 1 other location - About 2 hrs to fix
docs/_static/searchtools.js on lines 529..534

This issue has a mass of 84.

Similar blocks of code found in 2 locations. Consider refactoring.

    if (!titleTerms.hasOwnProperty(word)) {
      Object.keys(titleTerms).forEach((term) => {
        if (term.match(escapedWord))
          arr.push({ files: titleTerms[term], score: Scorer.partialTitle });
      });
Severity: Major
Found in docs/_static/searchtools.js and 1 other location - About 2 hrs to fix
docs/_static/searchtools.js on lines 523..528

This issue has a mass of 84.
