edgewall/trac

Showing 1,372 of 1,372 total issues

Function send_project_index has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def send_project_index(environ, start_response, parent_dir=None,
                       env_paths=None):
    req = Request(environ, start_response)

    loadpaths = [pkg_resources.resource_filename('trac', 'templates')]
Severity: Minor
Found in trac/web/main.py - About 1 hr to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
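
To make these rules concrete, here is a small hedged sketch (not taken from the Trac codebase; the scoring comments are approximate): each flow-breaking structure adds a point and each level of nesting adds another, so a deeply nested version scores far higher than a flattened equivalent that leans on language shorthand.

# Illustrative only: approximate cognitive-complexity accounting in comments.
def count_flagged(records):
    total = 0
    for record in records:                    # +1  flow break
        if record.get('active'):              # +2  flow break, nested once
            for tag in record['tags']:        # +3  nested twice
                if tag.startswith('x'):       # +4  nested three deep
                    total += 1
    return total                              # roughly 10 in total

def count_flagged_flat(records):
    # Shorthand (a single generator expression) collapses the same logic
    # without deep nesting, so it reads as simpler and typically scores lower.
    return sum(1 for r in records if r.get('active')
                 for t in r['tags'] if t.startswith('x'))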

Function insert has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def insert(self, when=None):
        """Add ticket to database.
        """
        assert not self.exists, 'Cannot insert an existing ticket'

Severity: Minor
Found in trac/ticket/model.py - About 1 hr to fix

Function get_logo_data has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def get_logo_data(self, href, abs_href=None):
        # TODO: Possibly, links to 'common/' could use chrome.htdocs_location
        logo = {}
        logo_src = self.logo_src
        if logo_src:
Severity: Minor
Found in trac/web/chrome.py - About 1 hr to fix

Function read has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def read(self, sock):
        """Read and decode a Record from a socket."""
        try:
            header, length = self._recvall(sock, FCGI_HEADER_LEN)
        except:
Severity: Minor
Found in trac/web/_fcgi.py - About 1 hr to fix

Function shortrev has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def shortrev(self, rev, min_len=7):

        def get_shortrev(rev, min_len):
            """try to shorten sha id"""
            #try to emulate the following:
Severity: Minor
Found in tracopt/versioncontrol/git/PyGIT.py - About 1 hr to fix

Function history_relative_rev has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def history_relative_rev(self, sha, rel_pos):

        def get_history_relative_rev(sha, rel_pos):
            rev_dict = self.get_commits()

Severity: Minor
Found in tracopt/versioncontrol/git/PyGIT.py - About 1 hr to fix

Function main has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def main():
    args = sys.argv[1:]
    if not args:
        sys.stderr.write('Usage: %s algorithm files...\n' % sys.argv[0])
        return 2
Severity: Minor
Found in contrib/checksum.py - About 1 hr to fix

Function do_upgrade has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def do_upgrade(env, ver, cursor):
    """
    1. Zero-pad Subversion revision numbers in the cache.
    2. Remove wiki-macros directory.
    """
Severity: Minor
Found in trac/upgrades/db26.py - About 1 hr to fix

Function _check_quickjump has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def _check_quickjump(self, req, noquickjump, kwd):
        """Look for search shortcuts"""
        # Source quickjump  FIXME: delegate to ISearchSource.search_quickjump
        quickjump_href = None
        if kwd[0] == '/':
Severity: Minor
Found in trac/search/web_ui.py - About 1 hr to fix

Function get_repositories has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def get_repositories(self):
        """Retrieve repositories specified in a `projects_list` file."""
        if not self.projects_list:
            return

Severity: Minor
Found in tracopt/versioncontrol/git/git_fs.py - About 1 hr to fix

Function _cat_file_reader has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

    def _cat_file_reader(self, kind, sha):
        with self.__cat_file_pipe_lock:
            if self.__cat_file_pipe is None:
                self.__cat_file_pipe = self.repo.cat_file_batch()

Severity: Minor
Found in tracopt/versioncontrol/git/PyGIT.py - About 1 hr to fix

Function main has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def main():
    parser = argparse.ArgumentParser(description="""\
        If no flags are given, both jinja and html checks will be performed.

        An alternative usage is to run the tool via make, i.e. `make jinja`,
Severity: Minor
Found in contrib/jinjachecker.py - About 1 hr to fix

Function request has 31 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    function request() {
      if (!updating) {
        var new_values = form.serializeArray();
        if (values_changed(new_values)) {
          values = new_values;
Severity: Minor
Found in trac/htdocs/js/auto_preview.js - About 1 hr to fix

Function paintLogGraph has 31 lines of code (exceeds 25 allowed). Consider refactoring.
Open

  $.paintLogGraph = function(graph, canvas) {
    var ctx = canvas.getContext('2d');
    ctx.scale(-canvas.width / graph.columns,
              canvas.height / graph.vertices.length);
    ctx.translate(-graph.columns + 0.5, 0.5)
Severity: Minor
Found in trac/htdocs/js/log_graph.js - About 1 hr to fix

Function __init__ has 31 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def __init__(self, env, report=None, constraints=None, cols=None,
                 order=None, desc=0, group=None, groupdesc=0, verbose=0,
                 rows=None, page=None, max=None, format=None):
        self.env = env
        self.id = report  # if not None, it's the corresponding saved query
Severity: Minor
Found in trac/ticket/query.py - About 1 hr to fix

Function process_request has 30 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def process_request(self, req):
        presel = req.args.get('preselected')
        if presel and (presel + '/').startswith(req.href.browser() + '/'):
            req.redirect(presel)


Severity: Minor
Found in trac/versioncontrol/web_ui/browser.py - About 1 hr to fix

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    lines[sepIndex] = lines[sepIndex]
      .replace("{1}", oldOffset).replace("{2}", oldLength)
      .replace("{3}", newOffset).replace("{4}", newLength)
      .replace("{5}", title);
Severity: Major
Found in trac/htdocs/js/diff.js and 1 other location - About 1 hr to fix
trac/htdocs/js/diff.js on lines 36..39

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 59.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading
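
As an illustration of the kind of refactoring the DRY principle suggests, here is a hedged sketch, not the actual fix applied in Trac: the two identical __setitem__ blocks flagged below in trac/notification/model.py could be hoisted into a small shared mixin so the validation logic has a single authoritative home. The class and field names here are hypothetical stand-ins, not the real model classes.

# Hedged sketch: hypothetical classes standing in for the two model classes in
# trac/notification/model.py; names and fields are illustrative only.
class _FieldItemMixin(object):
    """Shared __setitem__ so the validation lives in exactly one place."""

    def __setitem__(self, name, value):
        if name not in self.fields:
            raise KeyError(name)
        self.values[name] = value


class ModelA(_FieldItemMixin):
    fields = ('name', 'priority')

    def __init__(self):
        self.values = {}


class ModelB(_FieldItemMixin):
    fields = ('sid', 'format')

    def __init__(self):
        self.values = {}

Both former copies would then delegate to the single shared implementation, so any future change to the validation happens in one place.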

Identical blocks of code found in 2 locations. Consider refactoring.
Open

          lines[sepIndex] = lines[sepIndex]
            .replace("{1}", oldOffset).replace("{2}", oldLength)
            .replace("{3}", newOffset).replace("{4}", newLength)
            .replace("{5}", title);
Severity: Major
Found in trac/htdocs/js/diff.js and 1 other location - About 1 hr to fix
trac/htdocs/js/diff.js on lines 92..95

This issue has a mass of 59.

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    def __setitem__(self, name, value):
        if name not in self.fields:
            raise KeyError(name)
        self.values[name] = value
Severity: Major
Found in trac/notification/model.py and 1 other location - About 1 hr to fix
trac/notification/model.py on lines 41..44

This issue has a mass of 40.

Identical blocks of code found in 2 locations. Consider refactoring.
Open

    def __setitem__(self, name, value):
        if name not in self.fields:
            raise KeyError(name)
        self.values[name] = value
Severity: Major
Found in trac/notification/model.py and 1 other location - About 1 hr to fix
trac/notification/model.py on lines 276..279

This issue has a mass of 40.
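
As the Tuning notes above describe, the duplication mass threshold can be adjusted per language in .codeclimate.yml. A minimal sketch follows; the threshold values are illustrative, not recommendations.

# .codeclimate.yml - illustrative sketch of per-language duplication tuning
version: "2"
plugins:
  duplication:
    enabled: true
    config:
      languages:
        python:
          mass_threshold: 40
        javascript:
          mass_threshold: 60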
