xeroc/python-graphenelib


Showing 136 of 136 total issues

Function __init__ has 6 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, data, klass=None, lazy=False, use_cache=True, *args, **kwargs):
Severity: Minor
Found in graphenecommon/blockchainobject.py - About 45 mins to fix
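
One way to get back under the limit is to group the rarely-touched switches into a single options object, so the constructor keeps only the essential arguments. This is a hypothetical sketch; CacheOptions and the class name are illustrative and not part of the library.

    # Hypothetical sketch: the caching flags move into one options object,
    # bringing the constructor down to three named arguments plus **kwargs.
    from dataclasses import dataclass

    @dataclass
    class CacheOptions:
        lazy: bool = False
        use_cache: bool = True

    class BlockchainObjectSketch:
        def __init__(self, data, klass=None, options=None, **kwargs):
            self.data = data
            self.klass = klass
            self.options = options or CacheOptions()

    obj = BlockchainObjectSketch({"id": "1.2.0"}, options=CacheOptions(lazy=True))
    print(obj.options.lazy)  # True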

Function awaitTxConfirmation has a Cognitive Complexity of 8 (exceeds 5 allowed). Consider refactoring.
Open

    def awaitTxConfirmation(self, transaction, limit=10):
        """Returns the transaction as seen by the blockchain after being
        included into a block

        .. note:: If you want instant confirmation, you need to instantiate
Severity: Minor
Found in graphenecommon/blockchain.py - About 45 mins to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

• Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
• Code is considered more complex for each "break in the linear flow of the code"
• Code is considered more complex when "flow breaking structures are nested"

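For a rough, hypothetical illustration of these rules (not code from this library): nesting is what drives the score up, so the same lookup written with nested flow breaks scores higher than a flattened version with identical behaviour. The increments in the comments are approximate.

    def first_negative_nested(rows):
        for row in rows:                  # +1 (loop)
            if row:                       # +2 (condition nested in a loop)
                for value in row:         # +3 (loop nested two levels deep)
                    if value < 0:         # +4 (condition nested three levels deep)
                        return value
        return None

    def first_negative_flat(rows):
        # The generator expression is typically treated as shorthand rather
        # than as extra flow-breaking structure.
        values = (value for row in rows for value in row)
        for value in values:              # +1 (loop)
            if value < 0:                 # +2 (condition nested in a loop)
                return value
        return None

    assert first_negative_nested([[1, 2], [3, -4]]) == -4
    assert first_negative_flat([[1, 2], [3, -4]]) == -4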

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if not rpcuser and "rpcuser" in self.config:
            rpcuser = self.config["rpcuser"]
Severity: Minor
Found in graphenecommon/chain.py and 1 other location - About 40 mins to fix
graphenecommon/chain.py on lines 80..81

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).

Tuning

This issue has a mass of 34.

We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

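If the default proves too noisy or too lax for this project, the mass threshold can be adjusted per language in .codeclimate.yml. The fragment below is only a sketch of what such a tweak might look like; the value 40 is an arbitrary illustration, not a recommendation.

    # Hypothetical .codeclimate.yml fragment: raise the duplication mass
    # threshold for Python so only larger blocks are flagged.
    plugins:
      duplication:
        enabled: true
        config:
          languages:
            python:
              mass_threshold: 40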

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __getitem__(self, key):
        if not self._fetched:
            self.refresh()
        return dict.__getitem__(self, key)
Severity: Minor
Found in graphenecommon/blockchainobject.py and 1 other location - About 40 mins to fix
graphenecommon/blockchainobject.py on lines 71..74

This issue has a mass of 34.

Similar blocks of code found in 2 locations. Consider refactoring.
Open

    def __contains__(self, key):
        if not self._fetched:
            self.refresh()
        return dict.__contains__(self, key)
Severity: Minor
Found in graphenecommon/blockchainobject.py and 1 other location - About 40 mins to fix
graphenecommon/blockchainobject.py on lines 58..61

This issue has a mass of 34.
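
One way to collapse the two near-identical overrides above is to route the lazy-fetch check through a single helper. The sketch below is hypothetical and assumes only what the snippets show: a _fetched flag and a refresh() method.

    # Hypothetical sketch: both dunder methods defer to one helper that
    # performs the lazy fetch at most once.
    class LazyDict(dict):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._fetched = False

        def refresh(self):
            # Placeholder for the real lookup against the blockchain.
            self._fetched = True

        def _ensure_fetched(self):
            if not self._fetched:
                self.refresh()

        def __getitem__(self, key):
            self._ensure_fetched()
            return dict.__getitem__(self, key)

        def __contains__(self, key):
            self._ensure_fetched()
            return dict.__contains__(self, key)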

Similar blocks of code found in 2 locations. Consider refactoring.
Open

        if not rpcpassword and "rpcpassword" in self.config:
            rpcpassword = self.config["rpcpassword"]
Severity: Minor
Found in graphenecommon/chain.py and 1 other location - About 40 mins to fix
graphenecommon/chain.py on lines 77..78

This issue has a mass of 34.
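
These two blocks (and the related complexity warning against connect further down) follow the same "use the explicit argument, otherwise fall back to self.config" pattern, so a small helper could state it once. A hypothetical sketch, assuming only that the config object behaves like a mapping:

    # Hypothetical helper; "config" stands in for self.config from the
    # snippets above.
    def from_config(config, value, key):
        """Return value if it is set, otherwise fall back to config[key]."""
        if not value and key in config:
            return config[key]
        return value

    config = {"rpcuser": "alice", "rpcpassword": "secret"}
    print(from_config(config, "", "rpcuser"))      # alice (fallback used)
    print(from_config(config, "bob", "rpcuser"))   # bob (explicit value wins)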

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(
Severity: Minor
Found in graphenecommon/blockchain.py - About 35 mins to fix

Function __init__ has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def __init__(self, urls, user=None, password=None, connect=True, **kwargs):
Severity: Minor
Found in grapheneapi/api.py - About 35 mins to fix

Function history has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def history(self, first=0, last=0, limit=-1, only_ops=[], exclude_ops=[]):
Severity: Minor
Found in graphenecommon/account.py - About 35 mins to fix
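
For signatures like this one, a common fix is to make the optional filters keyword-only (which also avoids the mutable [] defaults shown above). A hypothetical sketch, not the library's actual API:

    # Hypothetical sketch: the filter lists become keyword-only and default to
    # None instead of a shared mutable [].
    def history(first=0, last=0, limit=-1, *, only_ops=None, exclude_ops=None):
        only_ops = list(only_ops or [])
        exclude_ops = list(exclude_ops or [])
        return first, last, limit, only_ops, exclude_ops

    # Call sites now have to name the filters, which keeps them readable:
    print(history(limit=10, only_ops=["transfer"]))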

Function new_proposal has 5 arguments (exceeds 4 allowed). Consider refactoring.
Open

    def new_proposal(
Severity: Minor
Found in graphenecommon/chain.py - About 35 mins to fix

Function list_operations has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def list_operations(self):
        ret = list()
        for o in self.ops:
            if isinstance(o, ProposalBuilder):
                prop = o.get_raw()
Severity: Minor
Found in graphenecommon/transactionbuilder.py - About 35 mins to fix

Function __init__ has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def __init__(self, data, prefix=None):
        self.set_prefix(prefix)
        if isinstance(data, Base58):
            data = repr(data)
        if all(c in string.hexdigits for c in data):
Severity: Minor
Found in graphenebase/base58.py - About 35 mins to fix

Function new_proposal has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def new_proposal(
        self,
        parent=None,
        proposer=None,
        proposal_expiration=None,
Severity: Minor
Found in graphenecommon/chain.py - About 35 mins to fix

Function wait_for_and_get_block has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def wait_for_and_get_block(self, block_number, blocks_waiting_for=None):
        """Get the desired block from the chain; if the current head block is
        smaller (for both head and irreversible) then we wait, but a
        maximum of blocks_waiting_for * max_block_wait_repetition time
        before failure.
Severity: Minor
Found in graphenecommon/blockchain.py - About 35 mins to fix
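
A generic version of the wait-and-retry pattern the docstring describes might look like the sketch below; the function and parameter names are illustrative and not taken from the library.

    import time

    # Hypothetical sketch: poll the head block number and give up after a
    # bounded number of attempts.
    def wait_for_block(get_head_number, get_block, block_number,
                       attempts=9, interval=1.0):
        for _ in range(attempts):
            if get_head_number() >= block_number:
                return get_block(block_number)
            time.sleep(interval)
        raise TimeoutError("block %d not seen after %d attempts"
                           % (block_number, attempts))

    heads = iter([5, 6, 7])
    print(wait_for_block(lambda: next(heads), lambda n: {"block_num": n}, 7, interval=0))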

Function connect has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def connect(self, node="", rpcuser="", rpcpassword="", **kwargs):
        """Connect to blockchain network (internal use only)"""
        if not node:
            if "node" in self.config:
                node = self.config["node"]
Severity: Minor
Found in graphenecommon/chain.py - About 35 mins to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

            try:
                # if that failed, we assume that we have sent the memo
                memo_wif = self.blockchain.wallet.getPrivateKeyForPublicKey(
                    message["from"]
                )
Severity: Minor
Found in graphenecommon/memo.py and 1 other location - About 35 mins to fix
graphenecommon/memo.py on lines 124..126

This issue has a mass of 33.
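
Since the other location (memo.py lines 124..126) differs mainly in which party's key is looked up, one hypothetical way to fold the two blocks together is to loop over the candidate public keys. The sketch assumes only a wallet object exposing the getPrivateKeyForPublicKey method shown above; the exception type is a stand-in for whatever the real wallet raises when a key is missing.

    # Hypothetical sketch: try each candidate key once instead of repeating
    # the try/except for "to" and "from" separately.
    def find_memo_wif(wallet, message, missing_key_error=KeyError):
        for pubkey in (message["to"], message["from"]):
            try:
                return wallet.getPrivateKeyForPublicKey(pubkey)
            except missing_key_error:
                continue
        raise ValueError("None of the memo keys is known to the wallet")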

Function verify has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def verify(self, **kwargs):
        """Verify a message with an account's memo key

        :param str account: (optional) the account that owns the bet
            (defaults to ``default_account``)
Severity: Minor
Found in graphenecommon/message.py - About 35 mins to fix

Function __init__ has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def __init__(self, *args, **kwargs):

        if len(args) == 1 and isinstance(args[0], self.__class__):
            # In this case, there is only one argument which is already an
            # instance of a class that inherits Graphene Object, hence, we copy
Severity: Minor
Found in graphenebase/objects.py - About 35 mins to fix

Function getKeyType has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

    def getKeyType(self, account, pub):
        """Get key type"""
        for authority in ["owner", "active"]:
            for key in account[authority]["key_auths"]:
                if str(pub) == key[0]:
Severity: Minor
Found in graphenecommon/wallet.py - About 35 mins to fix
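
The nested loops are what push the score over the threshold; the same lookup can be written as a single generator expression, which may read more clearly. A hypothetical sketch based only on the excerpt above (the rest of the real method is not shown here):

    # Hypothetical sketch: one generator expression instead of two nested loops;
    # key_auths entries are (public_key, weight) pairs, as in the excerpt.
    def get_key_type(account, pub):
        return next(
            (
                authority
                for authority in ("owner", "active")
                for key, _weight in account[authority]["key_auths"]
                if str(pub) == key
            ),
            None,
        )

    account = {"owner": {"key_auths": [("GPH-owner-key", 1)]},
               "active": {"key_auths": [("GPH-active-key", 1)]}}
    print(get_key_type(account, "GPH-active-key"))  # active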

Function verify_message has a Cognitive Complexity of 7 (exceeds 5 allowed). Consider refactoring.
Open

def verify_message(message, signature, hashfn=hashlib.sha256):
    if not isinstance(message, bytes):
        message = bytes(message, "utf-8")
    if not isinstance(signature, bytes):  # pragma: no cover
        signature = bytes(signature, "utf-8")
Severity: Minor
Found in graphenebase/ecdsa.py - About 35 mins to fix
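
The repeated isinstance/convert pairs are only a small part of the score, but they are easy to factor out. A hypothetical helper (not part of graphenebase):

    # Hypothetical helper: normalise str input to bytes once, instead of
    # repeating the isinstance check for every argument.
    def to_bytes(value, encoding="utf-8"):
        return value if isinstance(value, bytes) else bytes(value, encoding)

    # verify_message could then begin with:
    #     message = to_bytes(message)
    #     signature = to_bytes(signature)
    print(to_bytes("hello"))   # b'hello'
    print(to_bytes(b"hello"))  # b'hello'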
