ingadhoc/odoo-argentina

Showing 180 of 180 total issues

Identical blocks of code found in 2 locations. Consider refactoring.
Open

            try:
                ws.ConstatarComprobante(
                    cbte_modo, cuit_emisor, pto_vta, cbte_tipo, cbte_nro,
                    cbte_fch, imp_total, cod_autorizacion, doc_tipo_receptor,
                    doc_nro_receptor)
Severity: Major
Found in l10n_ar_afipws_fe/models/invoice.py and 1 other location - About 4 hrs to fix
l10n_ar_afipws_fe/models/invoice.py on lines 580..603

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge over time, leaving bugs behind as the two similar implementations drift apart in subtle ways.
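
For this particular finding, the usual fix is an Extract Method refactoring: both call sites delegate to one shared helper instead of repeating the request. The sketch below is hypothetical (the helper name and the error handling are assumptions, not code from this repository):

def constatar_comprobante(ws, *args):
    """Hypothetical shared helper: the two duplicated try/ConstatarComprobante
    blocks would both delegate here, so the request is defined exactly once."""
    try:
        return ws.ConstatarComprobante(*args)
    except Exception:
        # Whatever error handling the two copies currently share would move
        # here as well, instead of being maintained in two places.
        raise

Each call site then reduces to constatar_comprobante(ws, cbte_modo, cuit_emisor, pto_vta, ...), and any later change to the request or its error handling happens in one place.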

Tuning

This issue has a mass of 76.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings based on your project's guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
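
As a rough sketch, raising the Python mass threshold in .codeclimate.yml might look like the following; the exact keys depend on the engine version you run, so treat this as an assumption to verify against the codeclimate-duplication documentation:

engines:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 100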

Identical blocks of code found in 2 locations. Consider refactoring.
Open

            try:
                if afip_ws == 'wsfe':
                    ws.CAESolicitar()
                    vto = ws.Vencimiento
                elif afip_ws == 'wsmtxca':
Severity: Major
Found in l10n_ar_afipws_fe/models/invoice.py and 1 other location - About 4 hrs to fix
l10n_ar_afipws_fe/models/invoice.py on lines 296..314

This issue has a mass of 76.

Function get_withholding_vals has a Cognitive Complexity of 27 (exceeds 5 allowed). Consider refactoring.
Open

    def get_withholding_vals(self, voucher):
        vals = super(AccountTaxWithholding, self).get_withholding_vals(
            voucher)
        base_amount = vals['withholdable_base_amount']
        if self.type == 'arba_ws':
Severity: Minor
Found in l10n_ar_account_withholding/models/account_tax_withholding.py - About 3 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
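
As a rough illustration (hypothetical code, not taken from this repository), both functions below do the same work, but the first pays for each break in linear flow plus a nesting penalty, while the guard-clause version keeps the flow nearly linear:

def pick_rate_nested(rates, currency):
    # A loop, two nested ifs and a break: each break in linear flow counts,
    # and nesting makes the inner ones count for more.
    result = None
    for rate in rates:
        if rate.get('currency') == currency:
            if rate.get('value'):
                result = rate['value']
                break
    return result


def pick_rate_flat(rates, currency):
    # Guard clauses avoid the nesting, so the same logic reads (and scores)
    # as a mostly linear flow.
    for rate in rates:
        if rate.get('currency') != currency or not rate.get('value'):
            continue
        return rate['value']
    return None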

Identical blocks of code found in 2 locations. Consider refactoring.
Open

                    self.format_amount(
                        sum(inv.tax_line.filtered(
                            lambda r: r.tax_code_id.type == 'perception' and r.tax_code_id.tax == 'vat' and r.tax_code_id.application == 'national_taxes').mapped(
Severity: Major
Found in l10n_ar_account_vat_ledger_city/models/account_vat_report.py and 1 other location - About 3 hrs to fix
l10n_ar_account_vat_ledger_city/models/account_vat_report.py on lines 294..296
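
The repeated filtered()/sum() expressions in this report (this pair and the similar municipal/provincial blocks further down) differ only in the tax-code application and tax, so one hypothetical helper can parametrise the filter; the name below is illustrative, not code from this repository:

def perception_lines(inv, application, tax=None):
    """Hypothetical helper: return the perception tax lines for a given
    application (and optionally a given tax), so each report column builds on
    one shared filter instead of a copied lambda."""
    return inv.tax_line.filtered(
        lambda r: r.tax_code_id.type == 'perception'
        and (tax is None or r.tax_code_id.tax == tax)
        and r.tax_code_id.application == application)

A call site would then read self.format_amount(sum(perception_lines(inv, 'national_taxes', tax='vat').mapped(...))), with only the arguments changing between columns.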

This issue has a mass of 71.

Identical blocks of code found in 2 locations. Consider refactoring.
Open

                    self.format_amount(
                        sum(inv.tax_line.filtered(
                            lambda r: r.tax_code_id.type == 'perception' and r.tax_code_id.tax == 'vat' and r.tax_code_id.application == 'national_taxes').mapped(
Severity: Major
Found in l10n_ar_account_vat_ledger_city/models/account_vat_report.py and 1 other location - About 3 hrs to fix
l10n_ar_account_vat_ledger_city/models/account_vat_report.py on lines 308..310

This issue has a mass of 71.

Function refres_currency_with_afip_ws has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
Open

    def refres_currency_with_afip_ws(self):
        """
        TODO: do not rewrite this method; instead, inherit from the
        original in a better way
        """
Severity: Minor
Found in l10n_ar_currency_update/models/currency_rate_update.py - About 3 hrs to fix

Function _get_vals has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
Open

    def _get_vals(self, invoice_subtype=False):
        vals = {}
        domain = [
            ('company_id', '=', self.company_id.id),
            ]
Severity: Minor
Found in l10n_ar_invoice/wizard/account_journal_create_wizard.py - About 3 hrs to fix

Function unify_geo_data has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

def unify_geo_data(input_string):
    """
    Return unified geographic data

    >>> data = unify_geo_data("Av. rivadavia 9858, buenos aires, argentina")
Severity: Minor
Found in l10n_ar_bank/wizard/geosearch.py - About 3 hrs to fix

Function ar_banks_iterator has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
Open

def ar_banks_iterator(
    url_bank_list='http://www.bcra.gov.ar/sisfin/sf010100.asp',
    url_bank_info='http://www.bcra.gov.ar/sisfin/sf010100.asp?bco=%s',
    country='Argentina'):
    """
Severity: Minor
Found in l10n_ar_bank/wizard/banks_def.py - About 3 hrs to fix

File account_vat_report.py has 295 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# -*- coding: utf-8 -*-
##############################################################################
# For copyright and license notices, see __openerp__.py file in module root
# directory
##############################################################################
Severity: Minor
Found in l10n_ar_account_vat_ledger_city/models/account_vat_report.py - About 3 hrs to fix

Function get_data_from_padron_afip has a Cognitive Complexity of 21 (exceeds 5 allowed). Consider refactoring.
Open

    def get_data_from_padron_afip(self):
        self.ensure_one()
        cuit = self.document_number
        # GET COMPANY
        # if there is certificate for user company, use that one, if not
Severity: Minor
Found in l10n_ar_padron_afip/models/res_partner.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                    self.format_amount(
                        sum(inv.tax_line.filtered(
                            lambda r: r.tax_code_id.type == 'perception' and r.tax_code_id.application == 'municipal_taxes').mapped(
Severity: Major
Found in l10n_ar_account_vat_ledger_city/models/account_vat_report.py and 1 other location - About 2 hrs to fix
l10n_ar_account_vat_ledger_city/models/account_vat_report.py on lines 322..324

This issue has a mass of 60.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

                    self.format_amount(
                        sum(inv.tax_line.filtered(
                            lambda r: r.tax_code_id.type == 'perception' and r.tax_code_id.application == 'provincial_taxes').mapped(
Severity: Major
Found in l10n_ar_account_vat_ledger_city/models/account_vat_report.py and 1 other location - About 2 hrs to fix
l10n_ar_account_vat_ledger_city/models/account_vat_report.py on lines 328..330

This issue has a mass of 60.

Function get_followup_table_html has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def get_followup_table_html(self, cr, uid, ids, context=None):
        """ Build the html tables to be included in emails sent to partners,
            when reminding them of their overdue invoices.
            :param ids: [id] of the partner for whom we are building the tables
            :rtype: string
Severity: Minor
Found in l10n_ar_account_followup/account_followup.py - About 2 hrs to fix

Function check_argentinian_invoice_taxes has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def check_argentinian_invoice_taxes(self):
        """
        This function is meant to be used as a constraint but also to be
        called from other models like vat citi
        """
Severity: Minor
Found in l10n_ar_invoice/models/invoice.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    @api.one
    @api.constrains('point_of_sale_id', 'company_id')
    def _check_company_id(self):
        """
        Check point of sale and journal company
Severity: Major
Found in l10n_ar_invoice/models/account.py and 1 other location - About 2 hrs to fix
l10n_ar_account_voucher/models/account_voucher.py on lines 110..118
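
Here the duplication spans two models, so one option is a small shared helper that both _check_company_id constraints call. The sketch below is an assumption about how such a helper could look, not code from this repository, and it assumes the new-API openerp.exceptions module is available:

from openerp.exceptions import ValidationError


def check_related_company(record, related, related_label):
    """Hypothetical shared check: both constraints compare a related record's
    company with the document's own company, so the comparison and its error
    message can be written once."""
    if related and related.company_id and related.company_id != record.company_id:
        raise ValidationError(
            "The %s must belong to the same company as the document."
            % related_label)

The point-of-sale constraint would then call check_related_company(self, self.point_of_sale_id, 'point of sale') and the voucher constraint check_related_company(self, self.receiptbook_id, 'receiptbook'), keeping the comparison itself in one place.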

This issue has a mass of 55.

Function change_partner has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def change_partner(self):
        self.ensure_one()
        self.field_ids.unlink()
        partner = self.partner_id
        fields_names = self.field_to_update_ids.mapped('name')
Severity: Minor
Found in l10n_ar_padron_afip/wizard/res_partner_update_from_padron_wizard.py - About 2 hrs to fix

Function partner_address has a Cognitive Complexity of 18 (exceeds 5 allowed). Consider refactoring.
Open

    def partner_address(self, partner, context=None):
        ret = ''
        if partner.street:
            ret += partner.street
        if partner.street2:
Severity: Minor
Found in l10n_ar_aeroo_base/parser.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    @api.one
    @api.constrains('receiptbook_id', 'company_id')
    def _check_company_id(self):
        """
        Check receiptbook_id and voucher company
Severity: Major
Found in l10n_ar_account_voucher/models/account_voucher.py and 1 other location - About 2 hrs to fix
l10n_ar_invoice/models/account.py on lines 405..412

This issue has a mass of 55.

account_invoice has 22 functions (exceeds 20 allowed). Consider refactoring.
Open

class account_invoice(models.Model):
    _inherit = "account.invoice"
    _order = "afip_document_number desc, number desc, id desc"

    state_id = fields.Many2one(
Severity: Minor
Found in l10n_ar_invoice/models/invoice.py - About 2 hrs to fix