rgs1/zk_shell

Showing 141 of 141 total issues

Function do_chkzk has 35 lines of code (exceeds 25 allowed). Consider refactoring.
Open

    def do_chkzk(self, params):
        """
\x1b[1mNAME\x1b[0m
        chkzk - Consistency check for a cluster

Severity: Minor
Found in zk_shell/shell.py - About 1 hr to fix

    Function children_of has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def children_of(self):
            if self.asynchronous:
                offs = 1 if self.path == "/" else len(self.path) + 1
                for path, stat in StatMap(self.client, self.path, recursive=True).get():
                    if stat.ephemeralOwner == 0:
    Severity: Minor
    Found in zk_shell/copy_util.py - About 1 hr to fix

    Cognitive Complexity

    Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

    A method's cognitive complexity is based on a few simple rules:

    • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
    • Code is considered more complex for each "break in the linear flow of the code"
    • Code is considered more complex when "flow breaking structures are nested"

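    As a rough, hypothetical illustration of these rules (the function below is not
    taken from zk_shell), nesting is what drives the score up: each flow-breaking
    construct costs a point, and every level of nesting it sits under adds another.

        def count_large_even(rows):
            total = 0
            for row in rows:              # +1 (for)
                for value in row:         # +2 (for, nested one level deep)
                    if value % 2 == 0:    # +3 (if, nested two levels deep)
                        if value > 10:    # +4 (if, nested three levels deep)
                            total += 1
            return total                  # approximate cognitive complexity: 10

    Flattening the two inner conditionals into a single `if value % 2 == 0 and value > 10`
    keeps the behaviour but lowers the score, because less of the flow-breaking logic is nested.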

    Function do_edit has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def do_edit(self, params):
            """
    \x1b[1mNAME\x1b[0m
            edit - Opens up an editor to modify and update a znode.
    
    
    Severity: Minor
    Found in zk_shell/shell.py - About 1 hr to fix

    Function extract_acl has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def extract_acl(cls, acl):
            """ parse an individual ACL (i.e.: world:anyone:cdrwa) """
            try:
                scheme, rest = acl.split(":", 1)
                credential = ":".join(rest.split(":")[0:-1])
    Severity: Minor
    Found in zk_shell/acl.py - About 1 hr to fix

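    To make the parse concrete, here is a hypothetical, standalone sketch of the same
    split logic (the function name is invented, and the real method wraps this in the
    try/except shown above):

        # Hypothetical sketch: split "scheme:credential:perms" into its three parts.
        def split_acl_spec(spec):
            scheme, rest = spec.split(":", 1)             # "digest", "user:pass:cdrwa"
            credential = ":".join(rest.split(":")[:-1])   # "user:pass"
            perms = rest.split(":")[-1]                   # "cdrwa"
            return scheme, credential, perms

        # e.g. split_acl_spec("world:anyone:cdrwa") -> ("world", "anyone", "cdrwa")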

    Function do_json_get has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def do_json_get(self, params):
            """
    \x1b[1mNAME\x1b[0m
            json_get - Get key (or keys, if nested) from a JSON object serialized in the given path
    
    
    Severity: Minor
    Found in zk_shell/shell.py - About 1 hr to fix

    Function children_of has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
    Open

        def children_of(self):
            root_path = self.path[0:-1] if self.path.endswith("/") else self.path
            for path, _, files in os.walk(root_path):
                path = path.replace(root_path, "")
                if path.startswith("/"):
    Severity: Minor
    Found in zk_shell/copy_util.py - About 1 hr to fix

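    Part of the function shown above trims the root prefix from each walked path by hand;
    as a hypothetical, self-contained illustration of the same idea (not zk_shell's actual
    fix), os.path.relpath can do the trimming:

        import os

        # Hypothetical sketch: yield file paths relative to root_path while walking it.
        def relative_children(root_path):
            root_path = root_path.rstrip("/") or "/"
            for dirpath, _dirnames, filenames in os.walk(root_path):
                rel = os.path.relpath(dirpath, root_path)   # "." for root_path itself
                prefix = "" if rel == "." else rel + "/"
                for name in filenames:
                    yield prefix + name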

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

            try:
                info_by_path = self._zk.ephemerals_info(params.hosts)
            except XClient.CmdFailed as ex:
                self.show_output(str(ex))
                return
    Severity: Major
    Found in zk_shell/shell.py and 1 other location - About 1 hr to fix
    zk_shell/shell.py on lines 2832..2836

    Duplicated Code

    Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

    Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

    When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge over time (leaving bugs behind as two similar implementations drift apart in subtle ways).

    Tuning

    This issue has a mass of 40.

    We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

    The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

    If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

    See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

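    For reference, a tuning entry for this repository's Python code might look roughly like
    the sketch below; the exact keys depend on your Code Climate configuration version, so
    verify them against the codeclimate-duplication documentation before relying on this.

        # .codeclimate.yml (sketch; key names assumed from the codeclimate-duplication docs)
        plugins:
          duplication:
            enabled: true
            config:
              languages:
                python:
                  mass_threshold: 40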

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

            try:
                info_by_id = self._zk.sessions_info(params.hosts)
            except XClient.CmdFailed as ex:
                self.show_output(str(ex))
                return
    Severity: Major
    Found in zk_shell/shell.py and 1 other location - About 1 hr to fix
    zk_shell/shell.py on lines 2775..2779

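    One way to fold the two call sites together (a hypothetical sketch, not how zk_shell
    actually resolves this issue) is to move the shared try/except into a small helper on
    the same class and pass the fetch call in:

        # Hypothetical helper (assumes the surrounding Shell class, where XClient and
        # show_output are already available): run a fetch, report the error and bail.
        def _fetch_or_report(self, fetch, *args):
            try:
                return fetch(*args)
            except XClient.CmdFailed as ex:
                self.show_output(str(ex))
                return None

        # The duplicated blocks then collapse to, e.g.:
        #   info_by_path = self._fetch_or_report(self._zk.ephemerals_info, params.hosts)
        #   if info_by_path is None:
        #       return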

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

        def complete_txn(self, cmd_param_text, full_cmd, *rest):
            completers = [self._complete_path for i in range(0, 10)]
            return complete(completers, cmd_param_text, full_cmd, *rest)
    Severity: Major
    Found in zk_shell/shell.py and 1 other location - About 1 hr to fix
    zk_shell/shell.py on lines 1340..1342

    This issue has a mass of 39.

    Function do_json_set_many has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def do_json_set_many(self, params):
            """
    \x1b[1mNAME\x1b[0m
            json_set_many - like `json_set`, but for multiple key/value pairs
    
    
    Severity: Minor
    Found in zk_shell/shell.py - About 1 hr to fix

    Function _watcher has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def _watcher(self, watched_event):
            for path, stats in self._stats_by_path.items():
                if not watched_event.path.startswith(path):
                    continue
    
    
    Severity: Minor
    Found in zk_shell/watch_manager.py - About 1 hr to fix

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

                if type(current) == list:
                    try:
                        key = int(key)
                    except TypeError:
                        raise cls.Missing(key)
    Severity: Major
    Found in zk_shell/keys.py and 1 other location - About 1 hr to fix
    zk_shell/keys.py on lines 156..161

    This issue has a mass of 39.

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

        def complete_rm(self, cmd_param_text, full_cmd, *rest):
            completers = [self._complete_path for i in range(0, 10)]
            return complete(completers, cmd_param_text, full_cmd, *rest)
    Severity: Major
    Found in zk_shell/shell.py and 1 other location - About 1 hr to fix
    zk_shell/shell.py on lines 1434..1436

    This issue has a mass of 39.
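
    Since complete_txn (shown earlier) and complete_rm differ only in name, one hypothetical
    way to deduplicate them (not necessarily the project's preferred fix) is a small factory
    that builds the completer once:

        # Hypothetical factory, assuming the existing complete() helper and the
        # Shell class's _complete_path method: complete up to `count` path arguments.
        def _make_path_completer(count=10):
            def completer(self, cmd_param_text, full_cmd, *rest):
                completers = [self._complete_path] * count
                return complete(completers, cmd_param_text, full_cmd, *rest)
            return completer

        # Inside the class body:
        #   complete_txn = _make_path_completer()
        #   complete_rm = _make_path_completer()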

    Similar blocks of code found in 2 locations. Consider refactoring.
    Open

                if type(current) == list:
                    # Validate this key works with a list.
                    try:
                        key = int(key)
                    except ValueError:
    Severity: Major
    Found in zk_shell/keys.py and 1 other location - About 1 hr to fix
    zk_shell/keys.py on lines 113..117

    This issue has a mass of 39.
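
    The two fragments above differ only in which exception they catch (TypeError vs
    ValueError); a hypothetical shared helper (names invented) could absorb both:

        # Hypothetical helper: coerce a nested-key segment into a list index, raising
        # the caller-supplied exception type when the segment is not a valid index.
        def as_list_index(key, missing_exc):
            try:
                return int(key)
            except (TypeError, ValueError):
                raise missing_exc(key)

        # e.g., at both call sites:  key = as_list_index(key, cls.Missing)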

    Function get has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
    Open

        def get(self, exclude_recurse=None):
            """
            Paths matching exclude_recurse will not be recursed.
            """
            reqs = Queue()
    Severity: Minor
    Found in zk_shell/tree.py - About 1 hr to fix

    Function __init__ has 8 arguments (exceeds 4 allowed). Consider refactoring.
    Open

        def __init__(self,
    Severity: Major
    Found in zk_shell/shell.py - About 1 hr to fix
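
    The snippet is truncated, so the actual parameters are not visible here; as a purely
    illustrative sketch (the field names below are invented, not zk_shell's real constructor
    arguments), one common refactoring is to group related options into a parameter object:

        from dataclasses import dataclass

        # Hypothetical parameter object for illustration only.
        @dataclass
        class ConnectOptions:
            timeout: float = 10.0
            read_only: bool = False
            asynchronous: bool = True
            output: str = "text"

        class ExampleShell:
            def __init__(self, hosts, options=None):
                self.hosts = hosts
                self.options = options or ConnectOptions()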

    Function fetch has 8 arguments (exceeds 4 allowed). Consider refactoring.
    Open

            def fetch(endpoint, states, znodes, ephemerals, datasize, sessions, zxids, idx):
    Severity: Major
    Found in zk_shell/shell.py - About 1 hr to fix

    Function find_outliers has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
    Open

    def find_outliers(group, delta):
        """
        given a list of values, find those that are apart from the rest by
        `delta`. the indexes for the outliers is returned, if any.


    Severity: Minor
    Found in zk_shell/util.py - About 55 mins to fix

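    The docstring spells out the intended behaviour; a hedged, self-contained sketch of that
    behaviour (not necessarily util.py's actual implementation) might read:

        # Sketch: an index is an outlier when its value is at least `delta` away
        # from every other value in the group.
        def find_outliers_sketch(group, delta):
            outliers = []
            for i, value in enumerate(group):
                others = [v for j, v in enumerate(group) if j != i]
                if others and all(abs(value - v) >= delta for v in others):
                    outliers.append(i)
            return outliers

        # e.g. find_outliers_sketch([10, 11, 12, 40], 20) -> [3]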

    Function do_exists has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
    Open

        def do_exists(self, params):
            """
    \x1b[1mNAME\x1b[0m
            exists - Gets the znode's stat information


    Severity: Minor
    Found in zk_shell/shell.py - About 55 mins to fix

    Function do_summary has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
    Open

        def do_summary(self, params):
            """
    \x1b[1mNAME\x1b[0m
            summary - Prints summarized details of a path's children


    Severity: Minor
    Found in zk_shell/shell.py - About 55 mins to fix
