IBM/causallib

View on GitHub

Showing 111 of 111 total issues

Function _map_properties_to_variables has a Cognitive Complexity of 11 (exceeds 5 allowed). Consider refactoring.
Open

def _map_properties_to_variables(values, keys, var_type, value_type):
    """
    Maps between covariate variables and their properties.

    Args:
Severity: Minor
Found in causallib/simulation/CausalSimulator3.py - About 1 hr to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

topology.loc[generated_vars, given_vars] = np.random.binomial(
    n=1, p=p, size=(n_generated_vars, n_given_vars)).astype(bool)
Severity: Major
Found in causallib/simulation/CausalSimulator3.py and 2 other locations - About 1 hr to fix
causallib/simulation/CausalSimulator3.py on lines 1565..1566
causallib/simulation/CausalSimulator3.py on lines 1569..1569
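One way to address this duplication is to extract the repeated binomial draw into a shared helper. A minimal sketch (the helper name `random_boolean_edges` is hypothetical, not part of causallib):

```python
import numpy as np

def random_boolean_edges(n_rows, n_cols, p, seed=None):
    """Draw an n_rows x n_cols boolean block; each edge is present with probability p."""
    rng = np.random.default_rng(seed)
    return rng.binomial(n=1, p=p, size=(n_rows, n_cols)).astype(bool)
```

Each of the three call sites could then assign `topology.loc[...] = random_boolean_edges(...)` with its own dimensions.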

Function format_for_training has 30 lines of code (exceeds 25 allowed). Consider refactoring.
Open

def format_for_training(self, X, propensities, cf, headers_chars=None, exclude_hidden_vars=True):
    """
    Prepare the output: merge the data into two DataFrames - an observed one and one gathering the counterfactuals.

    Args:
Severity: Minor
Found in causallib/simulation/CausalSimulator3.py - About 1 hr to fix

Similar blocks of code found in 2 locations. Consider refactoring.
Open

propensities.columns = ["_".join([propensity_char, str(treatment), str(category)])
                        for treatment, category in propensities.columns.values]
Severity: Major
Found in causallib/simulation/CausalSimulator3.py and 1 other location - About 1 hr to fix
causallib/simulation/CausalSimulator3.py on lines 1471..1472

Similar blocks of code found in 2 locations. Consider refactoring.
Open

cf.columns = ["_".join([counterfact_char, str(outcome), str(treatment_category)])
              for outcome, treatment_category in cf.columns.values]
Severity: Major
Found in causallib/simulation/CausalSimulator3.py and 1 other location - About 1 hr to fix
causallib/simulation/CausalSimulator3.py on lines 1469..1470
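Both column-renaming snippets apply the same "prefix_level1_level2" pattern to a two-level column index, so one helper could serve both. A sketch (the name `prefix_join` is hypothetical):

```python
def prefix_join(prefix_char, columns):
    """Build 'prefix_level1_level2' labels from an iterable of 2-tuples."""
    return ["_".join([prefix_char, str(first), str(second)])
            for first, second in columns]
```

The two call sites would become `propensities.columns = prefix_join(propensity_char, propensities.columns.values)` and the analogous line for `cf`.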

Similar blocks of code found in 2 locations. Consider refactoring.
Open

for split in splits.keys():
    if split in [0]:
        dev += splits[split]
    else:
        train += splits[split]
Severity: Major
Found in causallib/contrib/hemm/hemm_utilities.py and 1 other location - About 1 hr to fix
causallib/contrib/hemm/hemm_utilities.py on lines 113..116

Similar blocks of code found in 2 locations. Consider refactoring.
Open

if split in [2]:
    dev += splits[split]
else:
    train += splits[split]
Severity: Major
Found in causallib/contrib/hemm/hemm_utilities.py and 1 other location - About 1 hr to fix
causallib/contrib/hemm/hemm_utilities.py on lines 95..99
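The two loops differ only in which split keys go to the dev set, so the dispatch could be parameterized. A sketch (the helper name `partition_splits` is hypothetical):

```python
def partition_splits(splits, dev_keys):
    """Distribute split members into dev/train lists based on dev_keys membership."""
    dev, train = [], []
    for key, members in splits.items():
        (dev if key in dev_keys else train).extend(members)
    return dev, train
```

One call site would pass `dev_keys={0}`, the other `dev_keys={2}`, collapsing the duplicated branches.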

Function slope_graph has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def slope_graph(
    left, right, thresh=None, label_imbalanced=True, color_below="C0", color_above="C1", marker="o", ax=None
):
    ax = ax or plt.gca()
    left_xtick = left.name or "unweighted"
Severity: Minor
Found in causallib/evaluation/plots/plots.py - About 1 hr to fix

Similar blocks of code found in 3 locations. Consider refactoring.
Open

self.mu = nn.ParameterList(nn.Parameter(mu[i]) for i in range(self.K))
Severity: Major
Found in causallib/contrib/hemm/hemm.py and 2 other locations - About 1 hr to fix
causallib/contrib/hemm/hemm.py on lines 85..85
causallib/contrib/hemm/hemm.py on lines 86..86

Similar blocks of code found in 3 locations. Consider refactoring.
Open

self.std = nn.ParameterList(nn.Parameter(std[i]) for i in range(self.K))
Severity: Major
Found in causallib/contrib/hemm/hemm.py and 2 other locations - About 1 hr to fix
causallib/contrib/hemm/hemm.py on lines 84..84
causallib/contrib/hemm/hemm.py on lines 86..86

Function compute_pvals has a Cognitive Complexity of 10 (exceeds 5 allowed). Consider refactoring.
Open

def compute_pvals(self, X, y):
    # TODO: export to stats_utils?
    is_y_binary = (len(np.unique(y)) == 2)
    # is_binary_feature = np.sum(((X != np.nanmin(X, axis=0)[np.newaxis, :]) &
    #                             (X != np.nanmax(X, axis=0)[np.newaxis, :])), axis=0) == 0
Severity: Minor
Found in causallib/preprocessing/filters.py - About 1 hr to fix
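Part of the complexity here is inline binary-type detection. Extracting that check into a small predicate is one way to trim branches; a sketch (the name `is_binary` is hypothetical, not causallib API):

```python
import numpy as np

def is_binary(arr):
    """True if arr holds exactly two distinct non-NaN values."""
    arr = np.asarray(arr, dtype=float)
    return np.unique(arr[~np.isnan(arr)]).size == 2
```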

Similar blocks of code found in 3 locations. Consider refactoring.
Open

self.p = nn.ParameterList(nn.Parameter(p[i]) for i in range(self.K))
Severity: Major
Found in causallib/contrib/hemm/hemm.py and 2 other locations - About 1 hr to fix
causallib/contrib/hemm/hemm.py on lines 84..84
causallib/contrib/hemm/hemm.py on lines 85..85
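All three flagged lines (`self.mu`, `self.std`, `self.p`) wrap a list of tensors into an `nn.ParameterList` the same way, so a single factory could cover them. A sketch (the name `to_parameter_list` is hypothetical):

```python
import torch
import torch.nn as nn

def to_parameter_list(tensors):
    """Wrap each tensor in nn.Parameter and collect into an nn.ParameterList."""
    return nn.ParameterList(nn.Parameter(t) for t in tensors)
```

The constructor would then read `self.mu = to_parameter_list(mu)`, and likewise for `std` and `p`.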

Similar blocks of code found in 2 locations. Consider refactoring.
Open

probabilities = 1.0 / (1 + np.exp(x_signal - np.repeat(t, x_signal.size)))
Severity: Major
Found in causallib/simulation/CausalSimulator3.py and 1 other location - About 1 hr to fix
causallib/simulation/CausalSimulator3.py on lines 982..982

Similar blocks of code found in 2 locations. Consider refactoring.
Open

cur_propensity = 1.0 / (1 + np.exp(x_continuous - np.repeat(t, x_continuous.size)))  # type: pd.Series
Severity: Major
Found in causallib/simulation/CausalSimulator3.py and 1 other location - About 1 hr to fix
causallib/simulation/CausalSimulator3.py on lines 847..847

Function generate_random_topology has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def generate_random_topology(n_covariates, p, n_treatments=1, n_outcomes=1, n_censoring=0, given_vars=(),
                             p_hidden=0.0):
    """
    Creates a random graph topology, suitable for describing a causal graph model.
    Generation is based on a G(n, p) random-graph model (each edge is included independently by a coin toss).
Severity: Minor
Found in causallib/simulation/CausalSimulator3.py - About 55 mins to fix

Function _plot_calibration_single has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def _plot_calibration_single(
    y_true,
    y_prob,
    n_bins=10,
    plot_diagonal=True,
Severity: Minor
Found in causallib/evaluation/plots/plots.py - About 55 mins to fix

Function make has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def make(
    evaluated_metrics: Union[pd.DataFrame, PropensityEvaluatorScores],
    models: Union[
        List[WeightEstimator],
        List[IndividualOutcomeEstimator],
Severity: Minor
Found in causallib/evaluation/results.py - About 55 mins to fix

Function _withreplacement_match has a Cognitive Complexity of 9 (exceeds 5 allowed). Consider refactoring.
Open

def _withreplacement_match(self, X, a):
    matches = {}  # maps treatment value to list of matches TO that value

    for treatment_value, knn in self.treatment_knns_.items():
        n_matchable = sum(a == treatment_value)
Severity: Minor
Found in causallib/estimation/matching.py - About 55 mins to fix
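For context on what this method does, with-replacement matching reduces to a plain nearest-neighbor query: each treated unit takes its closest control, and controls may be reused. A toy sketch with made-up covariates (not causallib's actual implementation):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical toy data: 3 control units, 2 treated units, one covariate each.
X_control = np.array([[0.0], [1.0], [2.0]])
X_treated = np.array([[0.9], [1.1]])

# With replacement, each treated unit simply takes its nearest control;
# the same control may serve several treated units.
knn = NearestNeighbors(n_neighbors=1).fit(X_control)
_, idx = knn.kneighbors(X_treated)
matches = idx.ravel()  # indices into the control pool
```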

Similar blocks of code found in 2 locations. Consider refactoring.
Open

ind_outcomes.loc[a == 0, 0] = y.loc[a == 0]
Severity: Minor
Found in causallib/estimation/xlearner.py and 1 other location - About 50 mins to fix
causallib/estimation/xlearner.py on lines 231..231
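The two flagged lines fill the observed outcome into the column of each unit's own treatment arm, once per arm, so a loop over the treatment values removes the duplication. A sketch (the name `fill_observed_outcomes` is hypothetical):

```python
import numpy as np
import pandas as pd

def fill_observed_outcomes(ind_outcomes, a, y):
    """Copy each unit's observed outcome into the column of its own treatment arm."""
    for treatment_value in ind_outcomes.columns:
        mask = a == treatment_value
        ind_outcomes.loc[mask, treatment_value] = y.loc[mask]
    return ind_outcomes
```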

Identical blocks of code found in 2 locations. Consider refactoring.
Open

propensity.loc[:, columns_names[0]] = np.ones(cur_propensity.size) - cur_propensity
Severity: Minor
Found in causallib/simulation/CausalSimulator3.py and 1 other location - About 50 mins to fix
causallib/simulation/CausalSimulator3.py on lines 915..915