kengz/spacy-nlp

src/py/nlp.py

Summary

Maintainability: F (about 1 wk to fix)
Test Coverage: n/a

Function parse_adj has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_adj(input, options):
    adjCount = 0
    adjArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Cognitive Complexity

Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.

A method's cognitive complexity is based on a few simple rules:

  • Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
  • Code is considered more complex for each "break in the linear flow of the code"
  • Code is considered more complex when "flow breaking structures are nested"

Further reading
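
Each of the flagged parse_* functions nests token- or entity-level loops and branches inside the nlp.pipe loop, which is what pushes the score past the threshold. One way to bring it down is to move the per-document work into a flat helper so the outer function keeps a single level of flow. The sketch below is illustrative only: it assumes parse_adj tallies adjectives per document roughly as the truncated snippet suggests, and collect_adjectives plus the result shape are hypothetical names, not taken from src/py/nlp.py.

import spacy

# Assumes the same English pipeline the module presumably loads at import time.
nlp = spacy.load('en_core_web_sm')


def collect_adjectives(doc):
    # Per-document work kept flat: a single comprehension, no nested branches.
    return [token.text for token in doc if token.pos_ == 'ADJ']


def parse_adj(texts, options=None):
    # The outer loop stays one level deep, so cognitive complexity stays low.
    results = []
    for doc in nlp.pipe(texts, disable=['ner', 'parser', 'textcat']):
        adjectives = collect_adjectives(doc)
        results.append({
            'text': doc.text,
            'adjectives': adjectives,
            'count': len(adjectives),
        })
    return results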

Function parse_date has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_date(input, options):
    dateCount = 0
    dateArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Function parse_time has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_time(input, options):
    timeCount = 0
    timeArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Function parse_nouns has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_nouns(input, options):
    nounCount = 0
    nounArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Function parse_verbs has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_verbs(input, options):
    verbCount = 0
    verbArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Function parse_named_entities has a Cognitive Complexity of 19 (exceeds 5 allowed). Consider refactoring.
Open

def parse_named_entities(input, options):
    entCount = 0
    entArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['textcat']):
Severity: Minor
Found in src/py/nlp.py - About 2 hrs to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def parse_date(input, options):
    dateCount = 0
    dateArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['textcat']):
Severity: Major
Found in src/py/nlp.py and 1 other location - About 1 day to fix
src/py/nlp.py on lines 220..242

Duplicated Code

Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:

Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code tends both to keep replicating and to diverge over time, leaving bugs behind as two similar implementations drift apart in subtle ways.
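
For this pair, the two blocks appear to differ only in which entity label they collect, so a single parameterized helper removes the duplication. A minimal sketch, assuming the module-level nlp pipeline and that parse_date and parse_time filter doc.ents on the DATE and TIME labels as their snippets suggest; parse_entities_with_label is a hypothetical name, not one defined in src/py/nlp.py.

def parse_entities_with_label(texts, label):
    # Shared body: collect entities carrying a single label per document.
    results = []
    for doc in nlp.pipe(texts, disable=['textcat']):
        matches = [ent.text for ent in doc.ents if ent.label_ == label]
        results.append({'text': doc.text, 'matches': matches,
                        'count': len(matches)})
    return results


def parse_date(texts, options=None):
    return parse_entities_with_label(texts, 'DATE')


def parse_time(texts, options=None):
    return parse_entities_with_label(texts, 'TIME')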

Tuning

This issue has a mass of 205.

We set useful threshold defaults for the languages we support, but you may want to adjust these settings to match your project's guidelines.

The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.

If the engine reports duplication too readily, try raising the threshold; if you suspect it isn't catching enough duplication, try lowering it. The best setting tends to differ from language to language.

See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.

Refactorings

Further Reading

Similar blocks of code found in 3 locations. Consider refactoring.
Open

def parse_nouns(input, options):
    nounCount = 0
    nounArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Major
Found in src/py/nlp.py and 2 other locations - About 1 day to fix
src/py/nlp.py on lines 120..142
src/py/nlp.py on lines 145..167
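
The same move works for the three part-of-speech parsers flagged in this group (parse_nouns, parse_verbs, parse_adj): the blocks differ only in which coarse POS tag they count, so one parameterized helper can replace all three. Again a sketch under the same assumptions as above; parse_tokens_with_pos is an illustrative name.

def parse_tokens_with_pos(texts, pos_tag):
    # Shared body: collect tokens carrying one coarse POS tag per document.
    results = []
    for doc in nlp.pipe(texts, disable=['ner', 'parser', 'textcat']):
        matches = [token.text for token in doc if token.pos_ == pos_tag]
        results.append({'text': doc.text, 'matches': matches,
                        'count': len(matches)})
    return results


def parse_nouns(texts, options=None):
    return parse_tokens_with_pos(texts, 'NOUN')


def parse_verbs(texts, options=None):
    return parse_tokens_with_pos(texts, 'VERB')


def parse_adj(texts, options=None):
    return parse_tokens_with_pos(texts, 'ADJ')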

Similar blocks of code found in 3 locations. Consider refactoring.
Open

def parse_adj(input, options):
    adjCount = 0
    adjArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Major
Found in src/py/nlp.py and 2 other locations - About 1 day to fix
src/py/nlp.py on lines 95..117
src/py/nlp.py on lines 120..142

Similar blocks of code found in 2 locations. Consider refactoring.
Open

def parse_time(input, options):
    timeCount = 0
    timeArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['textcat']):
Severity: Major
Found in src/py/nlp.py and 1 other location - About 1 day to fix
src/py/nlp.py on lines 195..217

Similar blocks of code found in 3 locations. Consider refactoring.
Open

def parse_verbs(input, options):
    verbCount = 0
    verbArray = []
    resultArray = []
    for doc in nlp.pipe(input, disable=['ner', 'parser', 'textcat']):
Severity: Major
Found in src/py/nlp.py and 2 other locations - About 1 day to fix
src/py/nlp.py on lines 95..117
src/py/nlp.py on lines 145..167

Line too long (109 > 79 characters)
Open

                if ent.label_ not in {'DATE', 'TIME', 'ORDINAL', 'QUANTITY', 'PERCENT', 'CARDINAL', 'MONEY'}:
Severity: Minor
Found in src/py/nlp.py by pep8

Limit all lines to a maximum of 79 characters.

There are still many devices around that are limited to 80-character lines; plus, limiting windows to 80 characters makes it possible to have several windows side by side. The default wrapping on such devices looks ugly. Therefore, please limit all lines to a maximum of 79 characters. For flowing long blocks of text (docstrings or comments), limiting the length to 72 characters is recommended.

Reports error E501.
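
One conventional fix for this E501 is to hoist the label set into a named module-level constant so the membership test fits within 79 characters. A minimal sketch; NUMERIC_ENT_LABELS and is_non_numeric_entity are illustrative names, not taken from src/py/nlp.py.

NUMERIC_ENT_LABELS = {
    'DATE', 'TIME', 'ORDINAL', 'QUANTITY',
    'PERCENT', 'CARDINAL', 'MONEY',
}


def is_non_numeric_entity(ent):
    # True when an entity's label falls outside the numeric/temporal set.
    return ent.label_ not in NUMERIC_ENT_LABELS

The flagged condition then shortens to if ent.label_ not in NUMERIC_ENT_LABELS: (or a call to the helper), either of which stays under the limit.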

Line too long (109 > 79 characters)
Open

                if ent.label_ not in {'DATE', 'TIME', 'ORDINAL', 'QUANTITY', 'PERCENT', 'CARDINAL', 'MONEY'}:
Severity: Minor
Found in src/py/nlp.py by pep8
