edgewall/trac

Showing 1,372 of 1,372 total issues

Function _render_comment_diff has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def _render_comment_diff(self, req, ticket, data, cnum):
        """Show differences between two versions of a ticket comment."""
        req.perm(ticket.resource).require('TICKET_VIEW')
        new_version = req.args.getint('version', 1)
        old_version = req.args.getint('old_version', new_version)
Severity: Minor
Found in trac/ticket/web_ui.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"
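
For illustration, here is a small Python sketch (hypothetical code, not taken from Trac) showing how these rules play out; the increments in the comments follow the nesting rule and are indicative rather than exact Code Climate scores:

def owners_of_open_tickets_nested(tickets):
    # Each flow-breaking structure adds to the score, and nesting makes
    # every inner break cost more.
    names = []
    for ticket in tickets:                      # +1: break in linear flow
        if ticket.get('owner'):                 # +2: nested flow break
            if ticket['status'] != 'closed':    # +3: nested even deeper
                names.append(ticket['owner'])
    return names

def owners_of_open_tickets_flat(tickets):
    # The same logic using the language's shorthand for collapsing
    # statements; fewer nested flow breaks, so a lower score.
    return [t['owner'] for t in tickets
            if t.get('owner') and t['status'] != 'closed']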

Further reading

Function _to_users has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def _to_users(self, users_perms_and_groups, ticket):
        """Finds all users contained in the list of `users_perms_and_groups`
        by recursive lookup of users when a `group` is encountered.
        """
        ps = PermissionSystem(self.env)
Severity: Minor
Found in trac/ticket/default_workflow.py - About 2 hrs to fix

Function process_request has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def process_request(self, req):
        milestone_id = req.args.get('id')
        action = req.args.get('action', 'view')
        if not milestone_id and action == 'view':
            req.redirect(req.href.roadmap())
Severity: Minor
Found in trac/ticket/roadmap.py - About 2 hrs to fix

Function get_copy_ancestry has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def get_copy_ancestry(self):
        """Retrieve the list of `(path,rev)` copy ancestors of this node.
        Most recent ancestor first. Each ancestor `(path, rev)` corresponds
        to the path and revision of the source at the time the copy or move
        operation was performed.
Severity: Minor
Found in tracopt/versioncontrol/svn/svn_fs.py - About 2 hrs to fix

Function _get_tags_or_branches has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def _get_tags_or_branches(self, paths):
        """Retrieve known branches or tags."""
        for path in self.params.get(paths, []):
            if path.endswith('*'):
                folder = posixpath.dirname(path)
Severity: Minor
Found in tracopt/versioncontrol/svn/svn_fs.py - About 2 hrs to fix

Function __init__ has a Cognitive Complexity of 17 (exceeds 5 allowed). Consider refactoring.
Open

    def __init__(self, repos, path, rev, log, ls_tree_info=None,
                 historian=None):
        self.log = log
        self.repos = repos
        self.fs_sha = None # points to either tree or blobs
Severity: Minor
Found in tracopt/versioncontrol/git/git_fs.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if nsub:
            print("reverted %d ignorable changes in %s" % (nsub, path))
            f.seek(0)
            f.write(sanitized)
            f.truncate()
Severity: Major
Found in contrib/l10n_revert_lineno_conflicts.py and 1 other location - About 2 hrs to fix
contrib/l10n_reset_en_GB.py on lines 58..64

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 53.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings
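
As a minimal sketch of one way to remove this particular duplication, the write-back block shared by the two contrib scripts could move into a helper; the helper name and the idea of a shared module are assumptions for illustration, not part of the Trac code base:

def rewrite_if_changed(f, sanitized, nsub, message, path):
    """Overwrite the open file `f` with `sanitized` when `nsub` changes
    were made, and report them. `message` is a format string taking the
    change count and the path.
    """
    if nsub:
        print(message % (nsub, path))
        f.seek(0)
        f.write(sanitized)
        f.truncate()

Each script would then make a single call such as rewrite_if_changed(f, sanitized, nsub, "reset %d messages to en_US in %s", path).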

Further Reading

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def get_ticket_actions(self, req, ticket):
        actions = []
        if ticket.exists and 'TICKET_STATUSFIX' in req.perm(ticket.resource):
            actions.append((0, 'force_status'))
        return actions
Severity: Major
Found in sample-plugins/workflow/StatusFixer.py and 1 other location - About 2 hrs to fix
sample-plugins/workflow/DeleteTicket.py on lines 46..50

This issue has a mass of 53.
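
A hedged sketch of how the two sample workflow plugins could share this method, assuming a common mixin; the class name and attributes below are hypothetical, not part of the sample plugins:

class SingleActionWorkflowMixin(object):
    """Expose one workflow action guarded by a single permission."""

    # Subclasses define these; StatusFixer would use
    # 'TICKET_STATUSFIX' / 'force_status', and DeleteTicket
    # 'TICKET_DELETE' / 'delete'.
    required_permission = None
    action_name = None

    def get_ticket_actions(self, req, ticket):
        actions = []
        if ticket.exists and self.required_permission in req.perm(ticket.resource):
            actions.append((0, self.action_name))
        return actions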

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if nsub:
            print("reset %d messages to en_US in %s" % (nsub, path))
            f.seek(0)
            f.write(sanitized)
            f.truncate()
Severity: Major
Found in contrib/l10n_reset_en_GB.py and 1 other location - About 2 hrs to fix
contrib/l10n_revert_lineno_conflicts.py on lines 58..64

This issue has a mass of 53.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def get_ticket_actions(self, req, ticket):
        actions = []
        if ticket.exists and 'TICKET_DELETE' in req.perm(ticket.resource):
            actions.append((0, 'delete'))
        return actions
Severity: Major
Found in sample-plugins/workflow/DeleteTicket.py and 1 other location - About 2 hrs to fix
sample-plugins/workflow/StatusFixer.py on lines 50..54

This issue has a mass of 53.

SQLiteConnection has 21 functions (exceeds 20 allowed). Consider refactoring.
Open

class SQLiteConnection(ConnectionBase, ConnectionWrapper):
    """Connection wrapper for SQLite."""

    __slots__ = ['_active_cursors', '_eager']

Severity: Minor
Found in trac/db/sqlite_backend.py - About 2 hrs to fix

PostgreSQLConnection has 21 functions (exceeds 20 allowed). Consider refactoring.
Open

class PostgreSQLConnection(ConnectionBase, ConnectionWrapper):
    """Connection wrapper for PostgreSQL."""

    poolable = True


Severity: Minor
Found in trac/db/postgres_backend.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        try:
            p = Popen(args, env=environ, stderr=PIPE, close_fds=close_fds)
        except OSError as e:
            raise TracError(_("Unable to run %(path)s: %(msg)s",
                              path=self.pg_dump_path,
Severity: Major
Found in trac/db/postgres_backend.py and 1 other location - About 2 hrs to fix
trac/db/mysql_backend.py on lines 269..274

This issue has a mass of 52.
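
A minimal sketch of a helper that both database backends could call instead of repeating the Popen block; the function name and location are assumptions, and since the snippet above is truncated, the use of exception_to_unicode for the message is only a plausible guess at how the original call ends:

from subprocess import PIPE, Popen

from trac.core import TracError
from trac.util.text import exception_to_unicode
from trac.util.translation import _


def run_dump_tool(args, environ, tool_path, close_fds=True):
    """Start a database dump subprocess, turning OSError into TracError."""
    try:
        return Popen(args, env=environ, stderr=PIPE, close_fds=close_fds)
    except OSError as e:
        raise TracError(_("Unable to run %(path)s: %(msg)s",
                          path=tool_path, msg=exception_to_unicode(e)))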

Identical blocks of code found in 2 locations. Consider refactoring.
Open

        milestones = [m for m in Milestone.select(self.env)
                      if m.name != milestone.name
                      and 'MILESTONE_VIEW' in req.perm(m.resource)]
Severity: Major
Found in trac/ticket/roadmap.py and 1 other location - About 2 hrs to fix
trac/ticket/roadmap.py on lines 978..980

This issue has a mass of 52.
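
A hedged sketch of extracting the comprehension into a private helper on the module in trac/ticket/roadmap.py; the method name is an assumption, and Milestone is the class already imported there:

    def _viewable_other_milestones(self, req, milestone):
        """Milestones other than `milestone` that the request may view."""
        return [m for m in Milestone.select(self.env)
                if m.name != milestone.name
                and 'MILESTONE_VIEW' in req.perm(m.resource)]

Both call sites would then read milestones = self._viewable_other_milestones(req, milestone).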

Identical blocks of code found in 2 locations. Consider refactoring.
Open

            milestones = [m for m in Milestone.select(self.env)
                          if m.name != milestone.name
                          and 'MILESTONE_VIEW' in req.perm(m.resource)]
Severity: Major
Found in trac/ticket/roadmap.py and 1 other location - About 2 hrs to fix
trac/ticket/roadmap.py on lines 953..955

This issue has a mass of 52.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        try:
            p = Popen(args, env=environ, stderr=PIPE, close_fds=close_fds)
        except OSError as e:
            raise TracError(_("Unable to run %(path)s: %(msg)s",
                              path=self.mysqldump_path,
Severity: Major
Found in trac/db/mysql_backend.py and 1 other location - About 2 hrs to fix
trac/db/postgres_backend.py on lines 279..284

This issue has a mass of 52.

File trac.js has 254 lines of code (exceeds 250 allowed). Consider refactoring.
Open

(function($){

  if (typeof _ == 'undefined')
    babel.Translations.load({}).install();


Severity: Minor
Found in trac/htdocs/js/trac.js - About 2 hrs to fix

Function pre_process_request has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def pre_process_request(self, req, handler):
        if handler is not Chrome(self.env):
            for repo_info in self.get_all_repositories().values():
                if not as_bool(repo_info.get('sync_per_request')):
                    continue
Severity: Minor
Found in trac/versioncontrol/api.py - About 2 hrs to fix

Function get_repositories has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def get_repositories(self):
        """Retrieve repositories specified in TracIni.

        The `[repositories]` section can be used to specify a list
        of repositories.
Severity: Minor
Found in trac/versioncontrol/api.py - About 2 hrs to fix

Function format has a Cognitive Complexity of 16 (exceeds 5 allowed). Consider refactoring.
Open

    def format(self, text, out, max_depth=6, min_depth=1, shorten=True):
        self.shorten = shorten
        whitespace_indent = '  '
        self.outline = []
        Formatter.format(self, text)
Severity: Minor
Found in trac/wiki/formatter.py - About 2 hrs to fix
