Showing 106 of 141 total issues
Function _merge_sounding has a Cognitive Complexity of 141 (exceeds 15 allowed). Consider refactoring.
def _merge_sounding(self, parts):
    """Merge unmerged sounding data."""
    merged = {'STID': parts['STID'],
              'STNM': parts['STNM'],
              'SLAT': parts['SLAT'],
Cognitive Complexity
Cognitive Complexity is a measure of how difficult a unit of code is to intuitively understand. Unlike Cyclomatic Complexity, which determines how difficult your code will be to test, Cognitive Complexity tells you how difficult your code will be to read and comprehend.
A method's cognitive complexity is based on a few simple rules:
- Code is not considered more complex when it uses shorthand that the language provides for collapsing multiple statements into one
- Code is considered more complex for each "break in the linear flow of the code"
- Code is considered more complex when "flow breaking structures are nested"
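For illustration, the hypothetical snippet below (not taken from this codebase) shows how these rules play out: each flow break costs one point, plus one more for every level of nesting, so the guard-clause version of the same logic scores lower.
def classify_nested(value):
    if value is not None:        # +1
        if value > 0:            # +2 (one extra point for the nesting)
            return 'positive'
        else:                    # +1
            return 'non-positive'
    return 'missing'

def classify_flat(value):
    if value is None:            # +1
        return 'missing'
    if value > 0:                # +1
        return 'positive'
    return 'non-positive'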
Identical blocks of code found in 2 locations. Consider refactoring.
def _get_path_locations(self, segment_offsets, renderer):
    # Calculate increment of path length occupied by each marker drawn
    inc = self._step_size(renderer)
    # Find out how many markers that will accommodate, as well as remainder space
Duplicated Code
Duplicated code can lead to software that is hard to understand and difficult to change. The Don't Repeat Yourself (DRY) principle states:
Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
When you violate DRY, bugs and maintenance problems are sure to follow. Duplicated code has a tendency to both continue to replicate and also to diverge (leaving bugs as two similar implementations differ in subtle ways).
Tuning
This issue has a mass of 232.
We set useful threshold defaults for the languages we support but you may want to adjust these settings based on your project guidelines.
The threshold configuration represents the minimum mass a code block must have to be analyzed for duplication. The lower the threshold, the more fine-grained the comparison.
If the engine is too easily reporting duplication, try raising the threshold. If you suspect that the engine isn't catching enough duplication, try lowering the threshold. The best setting tends to differ from language to language.
See codeclimate-duplication's documentation for more information about tuning the mass threshold in your .codeclimate.yml.
Refactorings
- Extract Method
- Extract Class
- Form Template Method
- Introduce Null Object
- Pull Up Method
- Pull Up Field
- Substitute Algorithm
Further Reading
- Don't Repeat Yourself on the C2 Wiki
- Duplicated Code on SourceMaking
- Refactoring: Improving the Design of Existing Code by Martin Fowler. Duplicated Code, p76
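As a sketch of the Extract Method / Pull Up Method refactorings listed above (hypothetical names and arithmetic inferred only from the comments in the excerpt, not MetPy's actual fix), the shared marker-placement logic would live in one helper that both call sites use:
def _locate_markers(path_length, step):
    """Return evenly spaced marker positions along a path, centering the remainder."""
    count = int(path_length // step)         # how many markers fit on the path
    leftover = path_length - count * step    # remainder space not covered by markers
    start = leftover / 2 + step / 2          # center the run of markers on the path
    return [start + i * step for i in range(count)]

# Both former duplicates then reduce to a call of the shared helper.
positions_a = _locate_markers(120.0, 10.0)
positions_b = _locate_markers(75.0, 12.5)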
Identical blocks of code found in 2 locations. Consider refactoring.
def _get_path_locations(self, segment_offsets, renderer):
    # Calculate increment of path length occupied by each marker drawn
    inc = self._step_size(renderer)
    # Find out how many markers that will accommodate, as well as remainder space
This issue has a mass of 232.
Function gdxarray has a Cognitive Complexity of 73 (exceeds 15 allowed). Consider refactoring.
def gdxarray(self, parameter=None, date_time=None, coordinate=None,
             level=None, date_time2=None, level2=None):
    """Select grids and output as list of xarray DataArrays.
    Subset the data by parameter values. The default is to not
Function _unpack_grid has a Cognitive Complexity of 71 (exceeds 15 allowed). Consider refactoring.
def _unpack_grid(self, packing_type, part):
    """Read raw GEMPAK grid integers and unpack into floats."""
    if packing_type == PackingType.none:
        lendat = self.data_header_length - part.header_length - 1
Function parse_metar has a Cognitive Complexity of 68 (exceeds 15 allowed). Consider refactoring.
def parse_metar(metar_text, year, month, station_metadata=station_info):
    """Parse a METAR report in text form into a list of named tuples.
    Parameters
    ----------
Function snxarray has a Cognitive Complexity of 59 (exceeds 15 allowed). Consider refactoring.
def snxarray(self, station_id=None, station_number=None,
             date_time=None, state=None, country=None):
    """Select soundings and output as list of xarray Datasets.
    Subset the data by parameter values. The default is to not
Function _key_types has a Cognitive Complexity of 55 (exceeds 15 allowed). Consider refactoring.
def _key_types(self, keys):
    """Determine header information from a set of keys."""
    return [(key, '4s', self._decode_strip) if key == 'STID'
            else (key, 'i') if key == 'STNM'
            else (key, 'i', lambda x: x / 100) if key == 'SLAT'
Function _interp_logp_data has a Cognitive Complexity of 55 (exceeds 15 allowed). Consider refactoring.
def _interp_logp_data(sounding, missing=-9999):
    """Interpolate missing sounding data.
    This function is similar to the MR_MISS subroutine in GEMPAK.
    """
Function process_units has a Cognitive Complexity of 53 (exceeds 15 allowed). Consider refactoring.
def process_units(
    input_dimensionalities,
    output_dimensionalities,
    output_to=None,
    ignore_inputs_for_output=None
Function sfjson has a Cognitive Complexity of 51 (exceeds 15 allowed). Consider refactoring.
def sfjson(self, station_id=None, station_number=None,
           date_time=None, state=None, country=None,
           include_special=False):
    """Select surface stations and output as list of JSON objects.
Function __init__ has a Cognitive Complexity of 51 (exceeds 15 allowed). Consider refactoring.
def __init__(self, file, *args, **kwargs):
    super().__init__(file)
    # Row Headers
    self._buffer.jump_to(self._start, _word_to_position(self.prod_desc.row_headers_ptr))
Function __init__ has a Cognitive Complexity of 47 (exceeds 15 allowed). Consider refactoring.
def __init__(self, file, *args, **kwargs):
    super().__init__(file)
    # Row Headers
    self._buffer.jump_to(self._start, _word_to_position(self.prod_desc.row_headers_ptr))
Identical blocks of code found in 2 locations. Consider refactoring.
for i, inds in enumerate(tri.simplices):
    pts = tri.points[inds]
    x, y = np.vstack((pts, pts[0])).T
    ax.plot(x, y)
    ax.annotate(i, xy=(np.mean(x), np.mean(y)))
This issue has a mass of 91.
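Because the duplicated block is fully visible here, an Extract Method sketch is straightforward (hypothetical helper name; assumes matplotlib, numpy, and a SciPy Delaunay triangulation):
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import Delaunay

def plot_labeled_triangles(ax, tri):
    """Draw each simplex of a triangulation and label it with its index."""
    for i, inds in enumerate(tri.simplices):
        pts = tri.points[inds]
        x, y = np.vstack((pts, pts[0])).T   # repeat the first vertex to close the triangle
        ax.plot(x, y)
        ax.annotate(i, xy=(np.mean(x), np.mean(y)))

# Both call sites then reduce to a single call.
fig, ax = plt.subplots()
plot_labeled_triangles(ax, Delaunay(np.random.rand(10, 2)))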
Identical blocks of code found in 2 locations. Consider refactoring.
for i, inds in enumerate(tri.simplices):
    pts = tri.points[inds]
    x, y = np.vstack((pts, pts[0])).T
    ax.plot(x, y)
    ax.annotate(i, xy=(np.mean(x), np.mean(y)))
This issue has a mass of 91.
Function _merge_winds_height has a Cognitive Complexity of 45 (exceeds 15 allowed). Consider refactoring.
def _merge_winds_height(self, merged, parts, nsgw, nasw, istart):
    """Merge wind sections on height surfaces."""
    size = len(merged['HGHT'])
    psfc = merged['PRES'][0]
    zsfc = merged['HGHT'][0]
Function _get_bound_pressure_height has a Cognitive Complexity of 44 (exceeds 15 allowed). Consider refactoring.
def _get_bound_pressure_height(pressure, bound, height=None, interpolate=True):
    """Calculate the bounding pressure and height in a layer.
    Given pressure, optional heights and a bound, return either the closest pressure/height
    or interpolated pressure/height. If no heights are provided, a standard atmosphere
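A simplified numpy sketch of the behavior the docstring describes, i.e. returning either the nearest level or an interpolated value at the requested pressure bound (standalone illustration with made-up profile data, not MetPy's implementation):
import numpy as np

def bound_value(pressure, bound, height, interpolate=True):
    """Return a (pressure, height) pair at the requested pressure bound."""
    if interpolate:
        # np.interp needs increasing x, so reverse the decreasing pressure profile.
        return bound, np.interp(bound, pressure[::-1], height[::-1])
    # Otherwise pick the level whose pressure is closest to the bound.
    idx = np.argmin(np.abs(pressure - bound))
    return pressure[idx], height[idx]

pressure = np.array([1000.0, 925.0, 850.0, 700.0, 500.0])   # hPa
height = np.array([100.0, 800.0, 1500.0, 3100.0, 5800.0])   # m
print(bound_value(pressure, 900.0, height))                     # interpolated height
print(bound_value(pressure, 900.0, height, interpolate=False))  # closest level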
Function draw has a Cognitive Complexity of 41 (exceeds 15 allowed). Consider refactoring.
def draw(self):
    """Draw the panel."""
    # Only need to run if we've actually changed.
    if self._need_redraw:
MetPyDataArrayAccessor has 33 functions (exceeds 20 allowed). Consider refactoring.
class MetPyDataArrayAccessor:
    r"""Provide custom attributes and methods on xarray DataArrays for MetPy functionality.
    This accessor provides several convenient attributes and methods through the ``.metpy``
    attribute on a DataArray. For example, MetPy can identify the coordinate corresponding
Function _interp_moist_height has a Cognitive Complexity of 38 (exceeds 15 allowed). Consider refactoring.
def _interp_moist_height(sounding, missing=-9999):
    """Interpolate moist hydrostatic height.
    This function mimics the functionality of the MR_SCMZ
    subroutine in GEMPAK. This the default behavior when