whylabs/whylogs-python

Showing 3,918 of 3,918 total issues

TODO found
Open

                api_client=None,  # TODO: support custom clients

TODO found
Open

            # TODO: need to transform to utf-32 or handle surrogates

TODO found
Open

            # TODO: iterating over each column in order assumes single column metrics

TODO found
Open

      this.fileExtension = ".bin"; // TODO: should we make this .whylogs?

TODO found
Open

  // TODO: implement read and write when I make the reader and writer

TODO found
Open

    // TODO: implement (not implemented in java either

Possible hardcoded password: '~'
Open

        if token == "~":
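Scanners raise this finding whenever a credential-like value is compared against a string literal; even when the literal is a sentinel like `"~"`, the safer pattern is to load secrets from the environment and fail loudly when they are absent. A minimal sketch (the variable name `WHYLABS_API_KEY` and the function name are illustrative assumptions, not the whylogs API):

```python
import os

def get_api_token() -> str:
    # Read the token from the environment rather than embedding it
    # in source, so no literal ever appears in the codebase.
    # (Hypothetical sketch; WHYLABS_API_KEY is an assumed variable name.)
    token = os.environ.get("WHYLABS_API_KEY")
    if not token:
        raise RuntimeError("WHYLABS_API_KEY is not set")
    return token
```

This also keeps the secret out of version control and lets each deployment supply its own value.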

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Open

            assert res is not None
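The warning refers to CPython's `-O` flag, which compiles `assert` statements away entirely, so a check like `assert res is not None` silently disappears in optimized runs. Replacing it with an explicit raise keeps the guard in every mode. A minimal sketch (the function name and message are illustrative, not the whylogs code):

```python
def require_result(res):
    # Under `python -O`, an `assert res is not None` would be stripped
    # and a None would flow through unchecked. An explicit conditional
    # raise survives optimization.
    if res is None:
        raise ValueError("expected a result, got None")
    return res
```

The same substitution applies to each `assert ... is not None` flagged below.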

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Open

        assert U0.shape[0] == U1.shape[0]

TODO found
Open

        "def load_testing_data() -> pd.DataFrame:  # TODO remove after user testing\n",

TODO found
Open

            # TODO: remove when whylabs supports merge writes.

TODO found
Open

            # TODO: multi segment file format requires multiple offset calculations.

TODO found
Open

        TODO: type this

TODO found
Open

        # TODO: update format to support storing other field metadata, for now just support output field in v0 message format

TODO found
Open

    // TODO: implement segment processing here

TODO found
Open

      // TODO: log warning if it's not there

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Open

    assert stream is not None

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Open

            assert res is not None

Use of assert detected. The enclosed code will be removed when compiling to optimised byte code.
Open

        assert isinstance(target, SegmentedResultSet), "target must be a SegmentedResultSet"
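When the flagged assert enforces a runtime type contract, as this `isinstance` check does, the `-O`-safe equivalent is an explicit `TypeError`. A hedged sketch, with a generic `expected_type` parameter standing in for `SegmentedResultSet` so the example stays self-contained:

```python
def validate_target(target, expected_type):
    # Equivalent runtime type guard that is not removed by `python -O`.
    # (Illustrative; in the flagged code expected_type would be
    # SegmentedResultSet.)
    if not isinstance(target, expected_type):
        raise TypeError(f"target must be a {expected_type.__name__}")
    return target
```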

TODO found
Open

        for matrix in chain(data.list.tensors or [], pandas_tensors):  # TODO: stack these