edgewall/trac: trac/util/__init__.py (view on GitHub)

Summary

Maintainability: F (estimated time to fix: 1 wk)
Test Coverage: no data

File __init__.py has 1137 lines of code (exceeds 250 allowed). Consider refactoring.
Open

# -*- coding: utf-8 -*-
#
# Copyright (C) 2003-2023 Edgewall Software
# Copyright (C) 2003-2006 Jonas Borgström <jonas@edgewall.com>
# Copyright (C) 2006 Matthew Good <trac@matt-good.net>
Severity: Major
Found in trac/util/__init__.py - About 2 days to fix

    Function get_pkginfo has a Cognitive Complexity of 45 (exceeds 5 allowed). Consider refactoring.
    Open

    def get_pkginfo(dist):
        """Get a dictionary containing package information for a package
    
        `dist` can be either a Distribution instance or, as a shortcut,
        directly the module instance, if one can safely infer a Distribution
    Severity: Minor
    Found in trac/util/__init__.py - About 6 hrs to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

    Further reading
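As a hypothetical illustration of these rules (not code from trac), nesting flow-breaking structures raises the score much faster than a flat sequence of checks, while language shorthand such as a comprehension costs nothing:

```python
# Hypothetical examples: both functions filter positive numbers, but the
# nested version accumulates cognitive complexity through nesting penalties.

def flat(items):
    # One early return: one "break in the linear flow" (+1).
    if not items:
        return []
    # Comprehension is collapsing shorthand: no added complexity.
    return [i for i in items if i > 0]

def nested(matrix):
    result = []
    for row in matrix:            # +1 (flow break)
        for value in row:         # +2 (flow break, nested one level)
            if value > 0:         # +3 (flow break, nested two levels)
                result.append(value)
    return result
```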

    Function copytree has a Cognitive Complexity of 28 (exceeds 5 allowed). Consider refactoring.
    Open

    def copytree(src, dst, symlinks=False, skip=[], overwrite=False):
        """Recursively copy a directory tree using copy2() (from shutil.copytree.)
    
        Added a `skip` parameter consisting of absolute paths
        which we don't want to copy.
    Severity: Minor
    Found in trac/util/__init__.py - About 4 hrs to fix

    Function md5crypt has a Cognitive Complexity of 24 (exceeds 5 allowed). Consider refactoring.
    Open

    def md5crypt(password, salt, magic='$1$'):
        """Based on FreeBSD src/lib/libcrypt/crypt.c 1.2
    
        :param password: the plain text password to crypt
        :param salt: the raw salt
    Severity: Minor
    Found in trac/util/__init__.py - About 3 hrs to fix

    Function native_path has a Cognitive Complexity of 22 (exceeds 5 allowed). Consider refactoring.
    Open

    def native_path(path):
        """Converts a Windows-style or POSIX-style path to the native style.
    
        i.e. on Windows, convert POSIX path to Windows path, and in a
        POSIX system, convert Windows path to POSIX path.
    Severity: Minor
    Found in trac/util/__init__.py - About 3 hrs to fix
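The excerpt above shows only the docstring. A minimal sketch of the idea (ignoring drive letters, UNC paths, and the other cases that push the real trac function's complexity up) might look like:

```python
import os

def native_path(path):
    # Sketch only: swap separators toward the current platform's style.
    # The real trac implementation handles many more cases.
    if os.sep == '/':
        return path.replace('\\', '/')
    return path.replace('/', '\\')
```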

    Function create_zipinfo has a Cognitive Complexity of 12 (exceeds 5 allowed). Consider refactoring.
    Open

    def create_zipinfo(filename, mtime=None, dir=False, executable=False, symlink=False,
                       comment=None):
        """Create a instance of `ZipInfo`.
    
        :param filename: file name of the entry
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix

    Function get_lines_from_file has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

    def get_lines_from_file(filename, lineno, context=0, globals=None):
        """Return `content` number of lines before and after the specified
        `lineno` from the (source code) file identified by `filename`.
    
        Returns a `(lines_before, line, lines_after)` tuple.
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix

    Function get_module_metadata has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

    def get_module_metadata(module):
        """Get a dictionary containing metadata for a module."""
        info = {}
        for k in ('author', 'author_email', 'maintainer',
                  'maintainer_email', 'home_page', 'url', 'license',
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix

    Function rename has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def rename(src, dst):
            # Try atomic or pseudo-atomic rename
            if _rename(src, dst):
                return
            # Fall back to "move away and replace"
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix
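The strategy the excerpt describes (try an atomic rename first, then fall back to "move away and replace") can be sketched roughly as follows. This is a hypothetical simplification; the real helper exists to paper over Windows-specific failure modes that `os.replace` did not always cover:

```python
import os

def rename(src, dst):
    """Sketch of 'atomic rename with fallback', not trac's exact code."""
    try:
        os.replace(src, dst)      # atomic overwrite on POSIX and Windows
        return
    except OSError:
        pass
    # Fall back to "move away and replace": park the old destination
    # under a temporary name, put the new file in place, then clean up.
    backup = dst + '.bak'
    os.rename(dst, backup)
    os.rename(src, dst)
    os.remove(backup)
```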

    Function __contains__ has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def __contains__(self, x):
            """
            >>> 55 in Ranges()
            False
            """
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix

    Function create_unique_file has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

    def create_unique_file(path):
        """Create a new file. An index is added if the path exists"""
        parts = os.path.splitext(path)
        flags = os.O_CREAT + os.O_WRONLY + os.O_EXCL
        if hasattr(os, 'O_BINARY'):
    Severity: Minor
    Found in trac/util/__init__.py - About 1 hr to fix
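A minimal sketch of the "append an index until O_EXCL succeeds" pattern the excerpt begins (a hypothetical simplification of trac's version):

```python
import errno
import os

def create_unique_file(path):
    """Open `path` for exclusive write; on collision, try name.1.ext,
    name.2.ext, ... until a new file can be created atomically."""
    base, ext = os.path.splitext(path)
    flags = os.O_CREAT | os.O_WRONLY | os.O_EXCL
    idx = 0
    candidate = path
    while True:
        try:
            return candidate, os.fdopen(os.open(candidate, flags, 0o666), 'wb')
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            idx += 1
            candidate = '%s.%d%s' % (base, idx, ext)
```

`O_EXCL` makes creation atomic, so two processes racing on the same name cannot both "win" the same file.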

    Function to_ranges has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
    Open

    def to_ranges(revs):
        """Converts a list of revisions to a minimal set of ranges.
    
        >>> to_ranges([2, 12, 3, 6, 9, 1, 5, 11])
        '1-3,5-6,9,11-12'
    Severity: Minor
    Found in trac/util/__init__.py - About 55 mins to fix
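A sketch of the algorithm the doctest implies (sort the revisions, then grow a run while values are consecutive); this is an illustration, not trac's exact code:

```python
def to_ranges(revs):
    """Collapse a list of ints into a minimal 'a-b,c,d-e' range string."""
    ranges = []
    start = end = None

    def flush():
        if start is not None:
            ranges.append('%d-%d' % (start, end) if start != end
                          else str(start))

    for rev in sorted(set(revs)):
        if end is not None and rev == end + 1:
            end = rev                 # extend the current run
        else:
            flush()                   # close the previous run, if any
            start = end = rev         # begin a new run
    flush()
    return ','.join(ranges)
```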

    Function as_bool has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
    Open

    def as_bool(value, default=False):
        """Convert the given value to a `bool`.
    
        If `value` is a string, return `True` for any of "yes", "true",
        "enabled", "on" or non-zero numbers, ignoring case. For non-string
    Severity: Minor
    Found in trac/util/__init__.py - About 55 mins to fix
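Based on the docstring's description, the conversion rules could be sketched as below. The handling of unrecognized strings and of the negative keyword list is an assumption here, since the excerpt is cut off before the non-string cases:

```python
def as_bool(value, default=False):
    """Sketch, not trac's exact code: truthy strings are "yes"/"true"/
    "enabled"/"on" or non-zero numbers, ignoring case; unparseable input
    falls back to `default` (assumed behaviour)."""
    if isinstance(value, str):
        word = value.strip().lower()
        if word in ('yes', 'true', 'enabled', 'on'):
            return True
        if word in ('no', 'false', 'disabled', 'off'):  # assumed list
            return False
        try:
            return bool(float(word))
        except ValueError:
            return default
    try:
        return bool(value)
    except (TypeError, ValueError):
        return default
```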

    Function create_zipinfo has 6 arguments (exceeds 4 allowed). Consider refactoring.
    Open

    def create_zipinfo(filename, mtime=None, dir=False, executable=False, symlink=False,
    Severity: Minor
    Found in trac/util/__init__.py - About 45 mins to fix

      Function appendrange has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
      Open

          def appendrange(self, r):
              """Add ranges to the current one.
      
              A range is specified as a string of the form "low-high", and
              `r` can be a list of such strings, a string containing comma-separated
      Severity: Minor
      Found in trac/util/__init__.py - About 45 mins to fix

      Function create_file has 5 arguments (exceeds 4 allowed). Consider refactoring.
      Open

      def create_file(path, data='', mode='w', encoding='utf-8', errors='strict'):
      Severity: Minor
      Found in trac/util/__init__.py - About 35 mins to fix

        Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
        Open

            def __init__(self, filename, mode='r', bufsize=-1, encoding='utf-8',
        Severity: Minor
        Found in trac/util/__init__.py - About 35 mins to fix

          Function _as_numeric has 5 arguments (exceeds 4 allowed). Consider refactoring.
          Open

          def _as_numeric(numeric_type, s, default, min, max):
          Severity: Minor
          Found in trac/util/__init__.py - About 35 mins to fix

            Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
            Open

                def __init__(self, path, mode='w', bufsize=-1, encoding='utf-8',
            Severity: Minor
            Found in trac/util/__init__.py - About 35 mins to fix

              Function copytree has 5 arguments (exceeds 4 allowed). Consider refactoring.
              Open

              def copytree(src, dst, symlinks=False, skip=[], overwrite=False):
              Severity: Minor
              Found in trac/util/__init__.py - About 35 mins to fix

                Function terminate has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                def terminate(process):
                    """Terminate the process.
                
                    If the process has already finished and has not been waited for,
                    the function does not raise OSError and WindowsError exceptions unlike
                Severity: Minor
                Found in trac/util/__init__.py - About 35 mins to fix

                Function get_sources has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                def get_sources(path):
                    """Return a dictionary mapping Python module source paths to the
                    distributions that contain them.
                    """
                    sources = {}
                Severity: Minor
                Found in trac/util/__init__.py - About 35 mins to fix

                Function truncate has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                    def truncate(self, max):
                        """Truncate the Ranges by setting a maximal allowed value.
                
                        Note that this `max` can be a value in a gap, so the only guarantee
                        is that `self.b` will be lesser than or equal to `max`.
                Severity: Minor
                Found in trac/util/__init__.py - About 35 mins to fix

                Function __init__ has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                    def __init__(self, command, input=None, capturestderr=None):
                        outfile = tempfile.mktemp()
                        command = '( %s ) > %s' % (command, outfile)
                        if input is not None:
                            infile = tempfile.mktemp()
                Severity: Minor
                Found in trac/util/__init__.py - About 35 mins to fix

                Function get_reporter_id has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
                Open

                def get_reporter_id(req, arg_name=None):
                    """Get most informative "reporter" identity out of a request.
                
                    That's the `Request`'s authname if not 'anonymous', or a `Request`
                    argument, or the session name and e-mail, or only the name or only
                Severity: Minor
                Found in trac/util/__init__.py - About 35 mins to fix

                Avoid too many return statements within this function.
                Open

                        return bool(value)
                Severity: Major
                Found in trac/util/__init__.py - About 30 mins to fix

                  Avoid too many return statements within this function.
                  Open

                          return default
                  Severity: Major
                  Found in trac/util/__init__.py - About 30 mins to fix

                    Avoid too many return statements within this function.
                    Open

                                    return toplevel in dist.get_metadata_lines('top_level.txt')
                    Severity: Major
                    Found in trac/util/__init__.py - About 30 mins to fix

                      Avoid too many return statements within this function.
                      Open

                                              return True
                      Severity: Major
                      Found in trac/util/__init__.py - About 30 mins to fix

                        Avoid too many return statements within this function.
                        Open

                                    return dist.key == toplevel.lower()
                        Severity: Major
                        Found in trac/util/__init__.py - About 30 mins to fix

                          Function safe__import__ has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                          Open

                          def safe__import__(module_name):
                              """
                              Safe imports: rollback after a failed import.
                          
                              Initially inspired from the RollbackImporter in PyUnit,
                          Severity: Minor
                          Found in trac/util/__init__.py - About 25 mins to fix
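The rollback idea (snapshot `sys.modules` before importing, and on failure delete whatever partially imported modules the attempt left behind) can be sketched as follows; a hypothetical simplification of trac's version:

```python
import sys

def safe__import__(module_name):
    """Import `module_name`; if the import fails, remove every module the
    failed attempt added to sys.modules so a later retry starts clean."""
    already_imported = set(sys.modules)
    try:
        return __import__(module_name)
    except Exception:
        for name in set(sys.modules) - already_imported:
            del sys.modules[name]
        raise
```

Without the rollback, a module that raises partway through its top-level code can linger in `sys.modules` in a half-initialized state and poison later imports.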

                          Function touch_file has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                          Open

                              def touch_file(filename):
                                  """Update modified time of the given file. The file is created if
                                  missing."""
                                  try:
                                      os.utime(filename, None)
                          Severity: Minor
                          Found in trac/util/__init__.py - About 25 mins to fix

                          Function _reduce has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                          Open

                              def _reduce(self):
                                  """Come up with the minimal representation of the ranges"""
                                  p = self.pairs
                                  p.sort()
                                  i = 0
                          Severity: Minor
                          Found in trac/util/__init__.py - About 25 mins to fix

                          Function __enter__ has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                          Open

                              def __enter__(self):
                                  if not self.filename:
                                      f = sys.stdin if 'r' in self.mode else sys.stdout
                                      if 'b' in self.mode:
                                          f = f.buffer
                          Severity: Minor
                          Found in trac/util/__init__.py - About 25 mins to fix

                          Function _rename has a Cognitive Complexity of 6 (exceeds 5 allowed). Consider refactoring.
                          Open

                                  def _rename(src, dst):
                                      if not isinstance(src, str):
                                          src = str(src, sys.getfilesystemencoding())
                                      if not isinstance(dst, str):
                                          dst = str(dst, sys.getfilesystemencoding())
                          Severity: Minor
                          Found in trac/util/__init__.py - About 25 mins to fix

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                              while 1:
                                  try:
                                      return path, os.fdopen(os.open(path, flags, 0o666), 'wb')
                                  except OSError as e:
                                      if e.errno != errno.EEXIST:
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 6 hrs to fix
                          trac/attachment.py on lines 964..973

                          Duplicated Code

                          Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

                          Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

                          When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

                          Tuning

                          This issue has a mass of 102.

                          We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

                          The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

                          If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

                          See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

                          Refactorings

                          Further Reading
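The duplicated retry loop above (shared with trac/attachment.py) could be factored into one shared helper that tries candidate paths until an exclusive create succeeds. This is a hedged sketch under assumed requirements, not the code either module actually uses; the name `create_unique_file` and the candidate-list interface are invented for illustration.

```python
import errno
import os

def create_unique_file(candidates):
    """Try each candidate path in turn, creating it exclusively.

    Returns the first (path, file object) pair that could be created;
    re-raises any error other than "file already exists".
    """
    flags = os.O_CREAT | os.O_EXCL | os.O_WRONLY
    for path in candidates:
        try:
            return path, os.fdopen(os.open(path, flags, 0o666), 'wb')
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    raise FileExistsError('all candidate paths already exist')
```

Both call sites would then only differ in how they generate the candidate paths.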

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                              except IOError as e:
                                  err = _("Failed to read %(metadata)s file for %(dist)s: %(err)s",
                                          metadata=metadata, dist=dist, err=to_unicode(e))
                                  for attr in attrs:
                                      info[normalize(attr)] = err
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 1 hr to fix
                          trac/util/__init__.py on lines 896..900

                          This issue has a mass of 45.

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                              except email.errors.MessageError as e:
                                  err = _("Failed to parse %(metadata)s file for %(dist)s: %(err)s",
                                          metadata=metadata, dist=dist, err=to_unicode(e))
                                  for attr in attrs:
                                      info[normalize(attr)] = err
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 1 hr to fix
                          trac/util/__init__.py on lines 891..895

                          This issue has a mass of 45.
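The two `except` blocks above differ only in the exception type and one verb ("read" vs. "parse"); the shared store-the-error loop could live in a small helper. A sketch, with the invented name `record_error` standing in for whatever Trac would actually call it:

```python
def record_error(info, attrs, normalize, message):
    """Store the same error message under every requested attribute key."""
    for attr in attrs:
        info[normalize(attr)] = message
    return info
```

Each handler would then build its own message and make a single `record_error(info, attrs, normalize, err)` call.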

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                  for a, b in self.pairs:
                                      if a == b:
                                          r.append(str(a))
                                      else:
                                          r.append("%d-%d" % (a, b))
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 1 hr to fix
                          trac/util/__init__.py on lines 1272..1276

                          This issue has a mass of 42.

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                              def store():
                                  if end == begin:
                                      ranges.append(str(begin))
                                  else:
                                      ranges.append('%d-%d' % (begin, end))
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 1 hr to fix
                          trac/util/__init__.py on lines 1205..1209

                          This issue has a mass of 42.
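Both range-rendering fragments above implement the same "collapse a single-element range" rule, so one shared formatter would remove the duplication. A minimal sketch (the name `format_range` is invented):

```python
def format_range(a, b):
    """Render an inclusive numeric range, collapsing a == b to one number."""
    return str(a) if a == b else '%d-%d' % (a, b)
```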

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                  if p:
                                      self.a = p[0][0]   # min value
                                      self.b = p[-1][1]  # max value
                          Severity: Major
                          Found in trac/util/__init__.py and 1 other location - About 1 hr to fix
                          trac/versioncontrol/web_ui/log.py on lines 452..454

                          This issue has a mass of 39.
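The min/max extraction shared with trac/versioncontrol/web_ui/log.py could be expressed once as a free function operating on a sorted pair list. A sketch under the assumption that the pairs are sorted and non-overlapping, as in the `Ranges` class:

```python
def bounds(pairs):
    """Return (minimum, maximum) of a sorted list of (low, high) pairs."""
    if pairs:
        return pairs[0][0], pairs[-1][1]
    return None
```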

                          Similar blocks of code found in 3 locations. Consider refactoring.
                          Open

                              def __init__(self, filename, mode='r', bufsize=-1, encoding='utf-8',
                                           errors='strict'):
                                  self.filename = filename
                                  self.mode = mode
                                  self.bufsize = bufsize
                          Severity: Major
                          Found in trac/util/__init__.py and 2 other locations - About 50 mins to fix
                          trac/db/schema.py on lines 47..53
                          trac/notification/api.py on lines 205..210

                          This issue has a mass of 36.
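Attribute-assignment boilerplate like the `__init__` above (repeated in trac/db/schema.py and trac/notification/api.py) is the kind of duplication `dataclasses` can generate automatically. A hypothetical sketch only; `FileSpec` is an invented name, and the real class presumably has behavior beyond storing these fields:

```python
from dataclasses import dataclass

@dataclass
class FileSpec:
    """Hypothetical holder: @dataclass generates the repetitive __init__."""
    filename: str
    mode: str = 'r'
    bufsize: int = -1
    encoding: str = 'utf-8'
    errors: str = 'strict'
```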

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                          return any(resource_name == os.path.normpath(name)
                                                     for name
                                                     in dist.get_metadata_lines('installed-files.txt'))
                          Severity: Minor
                          Found in trac/util/__init__.py and 1 other location - About 45 mins to fix
                          trac/util/__init__.py on lines 845..846

                          This issue has a mass of 35.

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                          return any(resource_name == os.path.normpath(name)
                                                     for name in dist.get_metadata_lines('SOURCES.txt'))
                          Severity: Minor
                          Found in trac/util/__init__.py and 1 other location - About 45 mins to fix
                          trac/util/__init__.py on lines 840..842

                          This issue has a mass of 35.
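The two `get_metadata_lines` lookups above differ only in the metadata file name (`installed-files.txt` vs. `SOURCES.txt`), so parameterizing that name removes the duplication. A sketch; `listed_in_metadata` is an invented name, and `dist` is assumed to be any object exposing pkg_resources' `get_metadata_lines`:

```python
import os

def listed_in_metadata(dist, resource_name, metadata_name):
    """True if resource_name appears, path-normalized, in the named
    metadata listing of a distribution-like object."""
    return any(resource_name == os.path.normpath(name)
               for name in dist.get_metadata_lines(metadata_name))
```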

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                      for i in range(4):
                                          yield itoa64[v & 0x3f]
                                          v >>= 6
                          Severity: Minor
                          Found in trac/util/__init__.py and 1 other location - About 40 mins to fix
                          trac/util/__init__.py on lines 1040..1042

                          This issue has a mass of 34.

                          Similar blocks of code found in 2 locations. Consider refactoring.
                          Open

                                  for i in range(2):
                                      yield itoa64[v & 0x3f]
                                      v >>= 6
                          Severity: Minor
                          Found in trac/util/__init__.py and 1 other location - About 40 mins to fix
                          trac/util/__init__.py on lines 1035..1037

                          This issue has a mass of 34.
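The two 6-bit emission loops above differ only in the iteration count (4 vs. 2), so a generator taking the count as a parameter would cover both. A sketch: the `itoa64` table below is the traditional crypt alphabet assumed from context, and `encode64` is an invented name; the report's snippets reference a table defined elsewhere in the module.

```python
# Traditional crypt-style base-64 alphabet (assumed from context).
itoa64 = './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'

def encode64(v, n):
    """Yield n characters encoding the low 6*n bits of v, low bits first."""
    for _ in range(n):
        yield itoa64[v & 0x3f]
        v >>= 6
```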
