OCA/server-tools


Showing 317 of 317 total issues

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    @api.model
    def create(self, values):
        # make sure the user trying this is actually supposed to do it
        if self.env.uid != SUPERUSER_ID and\
           not self.env.ref('database_cleanup.menu_database_cleanup')\
Severity: Major
Found in database_cleanup/model/purge_wizard.py and 1 other location - About 3 hrs to fix
database_cleanup/model/purge_wizard.py on lines 95..102

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge over time, leaving bugs behind as two similar implementations drift apart in subtle ways.
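
For instance, the two create() overrides flagged above (purge_wizard.py lines 41..48 and 95..102) could share one guarded create(). The sketch below is hypothetical rather than the module's actual fix: the abstract model and helper names are invented, and the permission test is reduced to a superuser-only placeholder.

    # Hypothetical sketch only; names are invented for illustration.
    from openerp import SUPERUSER_ID, api, models
    from openerp.exceptions import AccessError


    class CleanupPurgeBase(models.AbstractModel):
        """Shared parent so the access check lives in exactly one place."""
        _name = 'cleanup.purge.base'

        @api.model
        def _check_purge_access(self):
            # Placeholder for the real rule, which also inspects the
            # database_cleanup menu; here only the superuser passes.
            if self.env.uid != SUPERUSER_ID:
                raise AccessError('Only the administrator can run the cleanup')

        @api.model
        def create(self, values):
            # Every wizard inheriting this parent gets the same guarded create().
            self._check_purge_access()
            return super(CleanupPurgeBase, self).create(values)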

Tuning

This issue has a mass of 65.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Function _get_data has a Cognitive Complexity of 23 (exceeds 5 allowed). Consider refactoring.
Open

    def _get_data(self):
        ram = 0
        cpu = 0
        if psutil:
            process = psutil.Process(os.getpid())
Severity: Minor
Found in dead_mans_switch_client/models/dead_mans_switch_client.py - About 3 hrs to fix
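
A common way to flatten a method like _get_data is to push the optional psutil probing behind one small helper. The snippet below is a generic, hypothetical illustration (the helper name is invented and this is not the dead_mans_switch_client implementation):

    # Hypothetical helper: keep the psutil-dependent branches in one place so
    # the caller that assembles the report stays linear.
    import os

    try:
        import psutil
    except ImportError:  # psutil is an optional dependency
        psutil = None


    def _measure_process():
        """Return (ram_bytes, cpu_percent) for this process, or zeros."""
        if psutil is None:
            return 0, 0.0
        process = psutil.Process(os.getpid())
        return process.memory_info().rss, process.cpu_percent(interval=None)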

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
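
A minimal before/after sketch (generic code, not from this repository) of how those rules score nesting and why flattening the flow helps:

    def classify_nested(values):
        # Nested flow-breaking structures: each extra level raises the score.
        result = []
        for value in values:            # +1
            if value is not None:       # +2 (nested)
                if value > 0:           # +3 (nested deeper)
                    result.append('positive')
                else:                   # +1
                    result.append('non-positive')
        return result


    def classify_flat(values):
        # Same behaviour, rewritten with a guard clause so nothing nests more
        # than one level deep; shallower nesting means smaller increments.
        result = []
        for value in values:            # +1
            if value is None:           # +2 (nested once)
                continue
            if value > 0:               # +2 (nested once, not twice)
                result.append('positive')
            else:                       # +1
                result.append('non-positive')
        return result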

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    @api.model
    def create(self, values):
        # make sure the user trying this is actually supposed to do it
        if self.env.uid != SUPERUSER_ID and\
           not self.env.ref('database_cleanup.menu_database_cleanup')\
Severity: Major
Found in database_cleanup/model/purge_wizard.py and 1 other location - About 3 hrs to fix
database_cleanup/model/purge_wizard.py on lines 41..48

Function action_populate has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

    def action_populate(self, cr, uid, ids, context=None):
        """
        Prepopulate the user table from one or more LDAP resources.

        Obviously, the option to create users must be toggled in
Severity: Minor
Found in users_ldap_populate/models/users_ldap.py - About 3 hrs to fix

Function imgs_from_html has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

    def imgs_from_html(self, html_content, limit=None, fail=False):
        """Extract all images in order from an HTML field in a generator.

        :param str html_content:
            HTML contents from where to extract the images.
Severity: Minor
Found in html_image_url_extractor/models/ir_fields_converter.py - About 3 hrs to fix

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardModel, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge models')
Severity: Major
Found in database_cleanup/model/purge_models.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_columns.py on lines 86..91
database_cleanup/model/purge_data.py on lines 57..62
database_cleanup/model/purge_menus.py on lines 48..53
database_cleanup/model/purge_modules.py on lines 101..106
database_cleanup/model/purge_tables.py on lines 93..98
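
All six overrides listed here differ only in the wizard title, so one way to collapse them is a shared abstract parent that reads the title from a per-model attribute. This is a hypothetical sketch in the same old-API style as the excerpts; the model names and the _wizard_name attribute are invented, not the module's actual refactoring.

    # Hypothetical sketch, not the module's actual code.
    from openerp.osv import orm
    from openerp.tools.translate import _


    class CleanupPurgeWizard(orm.AbstractModel):
        """Shared parent holding the default_get logic exactly once."""
        _name = 'cleanup.purge.wizard'
        _wizard_name = 'Purge'  # each concrete wizard overrides this

        def default_get(self, cr, uid, fields, context=None):
            res = super(CleanupPurgeWizard, self).default_get(
                cr, uid, fields, context=context)
            if 'name' in fields:
                # _() on a variable skips source extraction; a real module
                # would need explicit translation entries for the titles.
                res['name'] = _(self._wizard_name)
            return res


    class CleanupPurgeWizardModel(orm.TransientModel):
        _name = 'cleanup.purge.wizard.model'
        _inherit = 'cleanup.purge.wizard'
        _wizard_name = 'Purge models'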

Duplicated Code: this issue has a mass of 62 (see the explanation and tuning notes above).

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardColumn, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge columns')
Severity: Major
Found in database_cleanup/model/purge_columns.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_data.py on lines 57..62
database_cleanup/model/purge_menus.py on lines 48..53
database_cleanup/model/purge_models.py on lines 130..135
database_cleanup/model/purge_modules.py on lines 101..106
database_cleanup/model/purge_tables.py on lines 93..98

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardTable, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge tables')
Severity: Major
Found in database_cleanup/model/purge_tables.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_columns.py on lines 86..91
database_cleanup/model/purge_data.py on lines 57..62
database_cleanup/model/purge_menus.py on lines 48..53
database_cleanup/model/purge_models.py on lines 130..135
database_cleanup/model/purge_modules.py on lines 101..106

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardModule, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge modules')
Severity: Major
Found in database_cleanup/model/purge_modules.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_columns.py on lines 86..91
database_cleanup/model/purge_data.py on lines 57..62
database_cleanup/model/purge_menus.py on lines 48..53
database_cleanup/model/purge_models.py on lines 130..135
database_cleanup/model/purge_tables.py on lines 93..98

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardMenu, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge menus')
Severity: Major
Found in database_cleanup/model/purge_menus.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_columns.py on lines 86..91
database_cleanup/model/purge_data.py on lines 57..62
database_cleanup/model/purge_models.py on lines 130..135
database_cleanup/model/purge_modules.py on lines 101..106
database_cleanup/model/purge_tables.py on lines 93..98

Similar blocks of code found in 6 locations. Consider refactoring.
Open

    def default_get(self, cr, uid, fields, context=None):
        res = super(CleanupPurgeWizardData, self).default_get(
            cr, uid, fields, context=context)
        if 'name' in fields:
            res['name'] = _('Purge data')
Severity: Major
Found in database_cleanup/model/purge_data.py and 5 other locations - About 3 hrs to fix
database_cleanup/model/purge_columns.py on lines 86..91
database_cleanup/model/purge_menus.py on lines 48..53
database_cleanup/model/purge_models.py on lines 130..135
database_cleanup/model/purge_modules.py on lines 101..106
database_cleanup/model/purge_tables.py on lines 93..98

Function _run_import has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def _run_import(self, commit=True, commit_threshold=100):
        """Run the import as cronjob, commit often"""
        self.ensure_one()
        if not self.password:
            return
Severity: Minor
Found in base_import_odoo/models/import_odoo_database.py - About 2 hrs to fix

Function _walk has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

def _walk(top, exclude_patterns, keep_langs):
    keep_langs = {l.split('_')[0] for l in keep_langs}
    for dirpath, dirnames, filenames in os.walk(top):
        dirnames.sort()
        reldir = os.path.relpath(dirpath, top)
Severity: Minor
Found in module_auto_update/addon_hash.py - About 2 hrs to fix
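
Because the excerpt only shows the top of _walk, the following is a hypothetical restructuring rather than the module's real code: moving the per-file keep/skip decision into a predicate keeps the walk itself a flat generator, which is the usual way to cut this kind of score. The _keep_file helper and its exact filtering rules are invented.

    # Hypothetical sketch: extract the nested decisions into one predicate.
    import fnmatch
    import os


    def _keep_file(relpath, exclude_patterns, keep_langs):
        """Decide in one place whether a file participates in the hash."""
        if any(fnmatch.fnmatch(relpath, pat) for pat in exclude_patterns):
            return False
        if relpath.endswith('.po'):
            lang = os.path.basename(relpath).rsplit('.', 1)[0].split('_')[0]
            return not keep_langs or lang in keep_langs
        return True


    def _walk(top, exclude_patterns, keep_langs):
        keep_langs = {l.split('_')[0] for l in keep_langs}
        for dirpath, dirnames, filenames in os.walk(top):
            dirnames.sort()
            reldir = os.path.relpath(dirpath, top)
            for filename in sorted(filenames):
                relpath = os.path.join(reldir, filename)
                if _keep_file(relpath, exclude_patterns, keep_langs):
                    yield relpath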

Function _run_import_create_dummy has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

    def _run_import_create_dummy(
        self, context, model, record, forcecreate=False,
    ):
        """Either misuse some existing record or create an empty one to satisfy
        required links"""
Severity: Minor
Found in base_import_odoo/models/import_odoo_database.py - About 2 hrs to fix

Function _revert_methods has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def _revert_methods(self):
        """Restore original ORM methods of models defined in rules."""
        updated = False
        for rule in self:
            model_model = self.env[rule.model_id.model or rule.model_model]
Severity: Minor
Found in auditlog/models/rule.py - About 2 hrs to fix

File restrict_field_access_mixin.py has 266 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# -*- coding: utf-8 -*-
# © 2016 Therp BV <http://therp.nl>
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
import json
from lxml import etree

AuditlogRule has 22 functions (exceeds 20 allowed). Consider refactoring.
Open

class AuditlogRule(models.Model):
    _name = 'auditlog.rule'
    _description = "Auditlog - Rule"

    name = fields.Char(u"Name", size=32, required=True)
Severity: Minor
Found in auditlog/models/rule.py - About 2 hrs to fix

Function attach_mail has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def attach_mail(self, connection, object_id, folder, mail_message, msgid):
        '''Return ids of messages created'''

        mail_message_ids = []

Severity: Minor
Found in fetchmail_attach_from_folder/model/fetchmail_server.py - About 2 hrs to fix

Function fields_view_get has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def fields_view_get(
            self, cr, uid, view_id=None, view_type='form', context=None,
            toolbar=False, submenu=False):
        s_set = _("Set")
        s_add = _("Add")
Severity: Minor
Found in mass_editing/wizard/mass_editing_wizard.py - About 2 hrs to fix

File db_backup.py has 260 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# -*- coding: utf-8 -*-
# © 2004-2009 Tiny SPRL (<http://tiny.be>).
# © 2015 Agile Business Group <http://www.agilebg.com>
# © 2016 Grupo ESOC Ingeniería de Servicios, S.L.U. - Jairo Llopis
# License AGPL-3.0 or later (http://www.gnu.org/licenses/gpl.html).
Severity: Minor
Found in auto_backup/models/db_backup.py - About 2 hrs to fix