TST common test for predictions shape consistency with single target #20603

Draft
glemaitre wants to merge 2 commits into scikit-learn:main from glemaitre:regressor_single_target

Conversation

Member

@glemaitre glemaitre commented Jul 25, 2021

Follow-up of #20355.

Check the consistency of the prediction shape when y has shape (n_samples,) and (n_samples, 1).

The list of passing/failing estimators is shown below. I excluded the *SearchCV and Pipeline estimators since their behavior depends on the inner estimator.
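For context, the inconsistency under test can be reproduced with a minimal sketch using LinearRegression, one of the estimators that fails the check in the list below (shapes as observed with current scikit-learn behavior):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = rng.randn(20)

# y of shape (n_samples,): predictions come back 1-D
pred_1d = LinearRegression().fit(X, y).predict(X)

# y of shape (n_samples, 1): predictions come back 2-D,
# which is the inconsistency the new common check flags
pred_2d = LinearRegression().fit(X, y[:, np.newaxis]).predict(X)

print(pred_1d.shape)  # (20,)
print(pred_2d.shape)  # (20, 1)
```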

Details
sklearn/tests/test_common.py::test_estimators[ARDRegression()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[AdaBoostRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[BaggingRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[BayesianRidge()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[CCA()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[DecisionTreeRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[DummyRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[ElasticNet()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[ElasticNetCV()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[ExtraTreeRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[ExtraTreesRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[GammaRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[GaussianProcessRegressor()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[GradientBoostingRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[HistGradientBoostingRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[HuberRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[KNeighborsRegressor()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[KernelRidge()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[Lars()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LarsCV()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[Lasso()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LassoCV()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LassoLars()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LassoLarsCV()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LassoLarsIC()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[LinearRegression()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[LinearSVR()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[MLPRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[MultiOutputRegressor(estimator=Ridge())-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[MultiTaskElasticNet()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[MultiTaskElasticNetCV()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[MultiTaskLasso()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[MultiTaskLassoCV()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[NuSVR()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[OrthogonalMatchingPursuit()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[OrthogonalMatchingPursuitCV()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[PLSCanonical()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[PLSRegression()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[PassiveAggressiveRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[PoissonRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[QuantileRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[RANSACRegressor(base_estimator=Ridge())-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[RadiusNeighborsRegressor()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[RandomForestRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[RegressorChain(base_estimator=Ridge())-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[Ridge()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[RidgeCV(alphas=array([0.1,1.,10.]))-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[SGDRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[SVR()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[StackingRegressor(estimators=[('est1',Ridge(alpha=0.1)),('est2',Ridge(alpha=1))])-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[TheilSenRegressor(max_subpopulation=10000)-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[TransformedTargetRegressor()-check_regressors_predict_single_target] FAILED
sklearn/tests/test_common.py::test_estimators[TweedieRegressor()-check_regressors_predict_single_target] PASSED
sklearn/tests/test_common.py::test_estimators[VotingRegressor(estimators=[('est1',Ridge(alpha=0.1)),('est2',Ridge(alpha=1))])-check_regressors_predict_single_target] PASSED
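Note that the MultiTask* estimators fail for a different reason than the others: they only accept a 2-D y, so _enforce_estimator_tags_y reshapes the target and the check's initial assert y.ndim == 1 cannot hold. A minimal sketch of that constraint:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
X = rng.randn(20, 3)

# Multi-task estimators reject a 1-D target outright
raised_for_1d = False
try:
    MultiTaskLasso(alpha=0.1).fit(X, rng.randn(20))
except ValueError:
    raised_for_1d = True

# With a 2-D target, predictions are naturally 2-D: (n_samples, n_tasks)
Y = rng.randn(20, 2)
pred = MultiTaskLasso(alpha=0.1).fit(X, Y).predict(X)
print(raised_for_1d)   # True
print(pred.shape)      # (20, 2)
```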

The full tracebacks are available below.

Details
====================================================================== FAILURES =======================================================================
____________________________________________ test_estimators[CCA()-check_regressors_predict_single_target] ____________________________________________

estimator = CCA(max_iter=5, n_components=1), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'CCA')
request = <FixtureRequest for <Function test_estimators[CCA()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'CCA')
estimator  = CCA(max_iter=5, n_components=1)
request    = <FixtureRequest for <Function test_estimators[CCA()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'CCA', regressor_orig = CCA(max_iter=5, n_components=1)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of CCA is not consistent with y.shape=(200,). We expect a shape of (200,) but predictions have a  shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'CCA'
regressor  = CCA(max_iter=5, n_components=1)
regressor_orig = CCA(max_iter=5, n_components=1)
y          = array([ 1.44842059,  0.02462946,  0.17266373, -0.4567032 , -1.1064117 ,
        0.52156598,  0.39521595,  1.45716779, ...090363,  0.4696285 , -0.05612368,  0.30389789,
       -0.96880098, -1.20408431,  0.25088783,  0.4746183 , -1.11748681])
y_pred     = array([[ 0.70341857],
       [ 0.41580605],
       [ 0.51516086],
       [-0.17535823],
       [-0.96079336],
       [...915966],
       [-0.75587075],
       [-1.27098298],
       [ 0.38843129],
       [-0.37049613],
       [-1.46128665]])

sklearn/utils/estimator_checks.py:2584: AssertionError
_________________________________ test_estimators[GaussianProcessRegressor()-check_regressors_predict_single_target] __________________________________

estimator = GaussianProcessRegressor()
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GaussianProcessRegressor')
request = <FixtureRequest for <Function test_estimators[GaussianProcessRegressor()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GaussianProcessRegressor')
estimator  = GaussianProcessRegressor()
request    = <FixtureRequest for <Function test_estimators[GaussianProcessRegressor()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'GaussianProcessRegressor', regressor_orig = GaussianProcessRegressor()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of GaussianProcessRegressor is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'GaussianProcessRegressor'
regressor  = GaussianProcessRegressor(alpha=0.01, random_state=0)
regressor_orig = GaussianProcessRegressor()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 1.43607283],
       [ 0.02564265],
       [ 0.17168575],
       [-0.45126504],
       [-1.09692214],
       [...113976],
       [-0.95972214],
       [-1.19334259],
       [ 0.24854972],
       [ 0.46966078],
       [-1.10717077]])

sklearn/utils/estimator_checks.py:2593: AssertionError
____________________________________ test_estimators[KNeighborsRegressor()-check_regressors_predict_single_target] ____________________________________

estimator = KNeighborsRegressor(), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'KNeighborsRegressor')
request = <FixtureRequest for <Function test_estimators[KNeighborsRegressor()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'KNeighborsRegressor')
estimator  = KNeighborsRegressor()
request    = <FixtureRequest for <Function test_estimators[KNeighborsRegressor()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'KNeighborsRegressor', regressor_orig = KNeighborsRegressor()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of KNeighborsRegressor is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'KNeighborsRegressor'
regressor  = KNeighborsRegressor()
regressor_orig = KNeighborsRegressor()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 7.03188092e-01],
       [ 4.98803882e-01],
       [ 4.34489736e-01],
       [-2.11787597e-02],
       [-5.132...09349650e-01],
       [-5.51305137e-01],
       [ 5.47142438e-01],
       [ 2.57795837e-01],
       [-9.97464138e-01]])

sklearn/utils/estimator_checks.py:2593: AssertionError
________________________________________ test_estimators[KernelRidge()-check_regressors_predict_single_target] ________________________________________

estimator = KernelRidge(), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'KernelRidge')
request = <FixtureRequest for <Function test_estimators[KernelRidge()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'KernelRidge')
estimator  = KernelRidge()
request    = <FixtureRequest for <Function test_estimators[KernelRidge()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'KernelRidge', regressor_orig = KernelRidge()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of KernelRidge is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'KernelRidge'
regressor  = KernelRidge(alpha=0.01)
regressor_orig = KernelRidge()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.64050253],
       [ 0.37861728],
       [ 0.469093  ],
       [-0.15967103],
       [-0.87485222],
       [...631252],
       [-0.68826007],
       [-1.15730675],
       [ 0.35371003],
       [-0.33734535],
       [-1.33060375]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_____________________________________ test_estimators[LinearRegression()-check_regressors_predict_single_target] ______________________________________

estimator = LinearRegression(), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'LinearRegression')
request = <FixtureRequest for <Function test_estimators[LinearRegression()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'LinearRegression')
estimator  = LinearRegression()
request    = <FixtureRequest for <Function test_estimators[LinearRegression()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'LinearRegression', regressor_orig = LinearRegression()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of LinearRegression is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'LinearRegression'
regressor  = LinearRegression()
regressor_orig = LinearRegression()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.64053854],
       [ 0.37863629],
       [ 0.46910957],
       [-0.1596826 ],
       [-0.87490606],
       [...63266 ],
       [-0.68830191],
       [-1.1573672 ],
       [ 0.35370862],
       [-0.33737672],
       [-1.33065924]])

sklearn/utils/estimator_checks.py:2593: AssertionError
___________________________ test_estimators[MultiOutputRegressor(estimator=Ridge())-check_regressors_predict_single_target] ___________________________

estimator = MultiOutputRegressor(estimator=Ridge())
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiOutputRegressor')
request = <FixtureRequest for <Function test_estimators[MultiOutputRegressor(estimator=Ridge())-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiOutputRegressor')
estimator  = MultiOutputRegressor(estimator=Ridge())
request    = <FixtureRequest for <Function test_estimators[MultiOutputRegressor(estimator=Ridge())-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'MultiOutputRegressor', regressor_orig = MultiOutputRegressor(estimator=Ridge())

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'MultiOutputRegressor'
regressor  = MultiOutputRegressor(estimator=Ridge())
regressor_orig = MultiOutputRegressor(estimator=Ridge())
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
____________________________________ test_estimators[MultiTaskElasticNet()-check_regressors_predict_single_target] ____________________________________

estimator = MultiTaskElasticNet(max_iter=5)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskElasticNet')
request = <FixtureRequest for <Function test_estimators[MultiTaskElasticNet()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskElasticNet')
estimator  = MultiTaskElasticNet(max_iter=5)
request    = <FixtureRequest for <Function test_estimators[MultiTaskElasticNet()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'MultiTaskElasticNet', regressor_orig = MultiTaskElasticNet(max_iter=5)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'MultiTaskElasticNet'
regressor  = MultiTaskElasticNet(max_iter=5)
regressor_orig = MultiTaskElasticNet(max_iter=5)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
___________________________________ test_estimators[MultiTaskElasticNetCV()-check_regressors_predict_single_target] ___________________________________

estimator = MultiTaskElasticNetCV(cv=3, max_iter=5)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskElasticNetCV')
request = <FixtureRequest for <Function test_estimators[MultiTaskElasticNetCV()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskElasticNetCV')
estimator  = MultiTaskElasticNetCV(cv=3, max_iter=5)
request    = <FixtureRequest for <Function test_estimators[MultiTaskElasticNetCV()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'MultiTaskElasticNetCV', regressor_orig = MultiTaskElasticNetCV(cv=3, max_iter=5)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'MultiTaskElasticNetCV'
regressor  = MultiTaskElasticNetCV(cv=3, max_iter=5)
regressor_orig = MultiTaskElasticNetCV(cv=3, max_iter=5)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
______________________________________ test_estimators[MultiTaskLasso()-check_regressors_predict_single_target] _______________________________________

estimator = MultiTaskLasso(max_iter=5), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskLasso')
request = <FixtureRequest for <Function test_estimators[MultiTaskLasso()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskLasso')
estimator  = MultiTaskLasso(max_iter=5)
request    = <FixtureRequest for <Function test_estimators[MultiTaskLasso()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'MultiTaskLasso', regressor_orig = MultiTaskLasso(max_iter=5)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'MultiTaskLasso'
regressor  = MultiTaskLasso(max_iter=5)
regressor_orig = MultiTaskLasso(max_iter=5)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
_____________________________________ test_estimators[MultiTaskLassoCV()-check_regressors_predict_single_target] ______________________________________

estimator = MultiTaskLassoCV(cv=3, max_iter=5)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskLassoCV')
request = <FixtureRequest for <Function test_estimators[MultiTaskLassoCV()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'MultiTaskLassoCV')
estimator  = MultiTaskLassoCV(cv=3, max_iter=5)
request    = <FixtureRequest for <Function test_estimators[MultiTaskLassoCV()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'MultiTaskLassoCV', regressor_orig = MultiTaskLassoCV(cv=3, max_iter=5)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'MultiTaskLassoCV'
regressor  = MultiTaskLassoCV(cv=3, max_iter=5)
regressor_orig = MultiTaskLassoCV(cv=3, max_iter=5)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
_______________________________________ test_estimators[PLSCanonical()-check_regressors_predict_single_target] ________________________________________

estimator = PLSCanonical(max_iter=5, n_components=1)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'PLSCanonical')
request = <FixtureRequest for <Function test_estimators[PLSCanonical()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'PLSCanonical')
estimator  = PLSCanonical(max_iter=5, n_components=1)
request    = <FixtureRequest for <Function test_estimators[PLSCanonical()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'PLSCanonical', regressor_orig = PLSCanonical(max_iter=5, n_components=1)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of PLSCanonical is not consistent with y.shape=(200,). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'PLSCanonical'
regressor  = PLSCanonical(max_iter=5, n_components=1)
regressor_orig = PLSCanonical(max_iter=5, n_components=1)
y          = array([ 1.44842059,  0.02462946,  0.17266373, -0.4567032 , -1.1064117 ,
        0.52156598,  0.39521595,  1.45716779, ...090363,  0.4696285 , -0.05612368,  0.30389789,
       -0.96880098, -1.20408431,  0.25088783,  0.4746183 , -1.11748681])
y_pred     = array([[ 0.6060054 ],
       [ 0.47097687],
       [ 0.67081485],
       [-0.11044873],
       [-0.75074555],
       [...252769],
       [-0.67251118],
       [-1.28084969],
       [ 0.80332846],
       [-0.08654241],
       [-1.80608003]])

sklearn/utils/estimator_checks.py:2584: AssertionError
_______________________________________ test_estimators[PLSRegression()-check_regressors_predict_single_target] _______________________________________

estimator = PLSRegression(max_iter=5, n_components=1)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'PLSRegression')
request = <FixtureRequest for <Function test_estimators[PLSRegression()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'PLSRegression')
estimator  = PLSRegression(max_iter=5, n_components=1)
request    = <FixtureRequest for <Function test_estimators[PLSRegression()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'PLSRegression', regressor_orig = PLSRegression(max_iter=5, n_components=1)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of PLSRegression is not consistent with y.shape=(200,). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'PLSRegression'
regressor  = PLSRegression(max_iter=5, n_components=1)
regressor_orig = PLSRegression(max_iter=5, n_components=1)
y          = array([ 1.44842059,  0.02462946,  0.17266373, -0.4567032 , -1.1064117 ,
        0.52156598,  0.39521595,  1.45716779, ...090363,  0.4696285 , -0.05612368,  0.30389789,
       -0.96880098, -1.20408431,  0.25088783,  0.4746183 , -1.11748681])
y_pred     = array([[ 0.51363173],
       [ 0.39918566],
       [ 0.56856225],
       [-0.09361298],
       [-0.63630908],
       [...0014  ],
       [-0.57000001],
       [-1.08560921],
       [ 0.68087675],
       [-0.07335071],
       [-1.53077847]])

sklearn/utils/estimator_checks.py:2584: AssertionError
___________________________ test_estimators[RANSACRegressor(base_estimator=Ridge())-check_regressors_predict_single_target] ___________________________

estimator = RANSACRegressor(base_estimator=Ridge(), max_trials=10)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RANSACRegressor')
request = <FixtureRequest for <Function test_estimators[RANSACRegressor(base_estimator=Ridge())-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RANSACRegressor')
estimator  = RANSACRegressor(base_estimator=Ridge(), max_trials=10)
request    = <FixtureRequest for <Function test_estimators[RANSACRegressor(base_estimator=Ridge())-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RANSACRegressor', regressor_orig = RANSACRegressor(base_estimator=Ridge(), max_trials=10)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of RANSACRegressor is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RANSACRegressor'
regressor  = RANSACRegressor(base_estimator=Ridge(), max_trials=10, random_state=0)
regressor_orig = RANSACRegressor(base_estimator=Ridge(), max_trials=10)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 5.37741736e-01],
       [ 4.33413224e-01],
       [ 2.59363700e-01],
       [-5.57968654e-02],
       [-4.712...25876395e-01],
       [-1.03453610e+00],
       [ 8.47355797e-01],
       [ 2.44500362e-01],
       [-1.40208628e+00]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_________________________________ test_estimators[RadiusNeighborsRegressor()-check_regressors_predict_single_target] __________________________________

estimator = RadiusNeighborsRegressor()
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RadiusNeighborsRegressor')
request = <FixtureRequest for <Function test_estimators[RadiusNeighborsRegressor()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RadiusNeighborsRegressor')
estimator  = RadiusNeighborsRegressor()
request    = <FixtureRequest for <Function test_estimators[RadiusNeighborsRegressor()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RadiusNeighborsRegressor', regressor_orig = RadiusNeighborsRegressor()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of RadiusNeighborsRegressor is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RadiusNeighborsRegressor'
regressor  = RadiusNeighborsRegressor()
regressor_orig = RadiusNeighborsRegressor()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2593: AssertionError
___________________________ test_estimators[RegressorChain(base_estimator=Ridge())-check_regressors_predict_single_target] ____________________________

estimator = RegressorChain(base_estimator=Ridge(), cv=3)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RegressorChain')
request = <FixtureRequest for <Function test_estimators[RegressorChain(base_estimator=Ridge())-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RegressorChain')
estimator  = RegressorChain(base_estimator=Ridge(), cv=3)
request    = <FixtureRequest for <Function test_estimators[RegressorChain(base_estimator=Ridge())-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RegressorChain', regressor_orig = RegressorChain(base_estimator=Ridge(), cv=3)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
>       assert y.ndim == 1
E       AssertionError

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RegressorChain'
regressor  = RegressorChain(base_estimator=Ridge(), cv=3)
regressor_orig = RegressorChain(base_estimator=Ridge(), cv=3)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])

sklearn/utils/estimator_checks.py:2573: AssertionError
___________________________________________ test_estimators[Ridge()-check_regressors_predict_single_target] ___________________________________________

estimator = Ridge(), check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'Ridge')
request = <FixtureRequest for <Function test_estimators[Ridge()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'Ridge')
estimator  = Ridge()
request    = <FixtureRequest for <Function test_estimators[Ridge()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'Ridge', regressor_orig = Ridge()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of Ridge is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'Ridge'
regressor  = Ridge(alpha=0.01, random_state=0)
regressor_orig = Ridge()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.64050253],
       [ 0.37861728],
       [ 0.469093  ],
       [-0.15967103],
       [-0.87485222],
       [...631252],
       [-0.68826007],
       [-1.15730675],
       [ 0.35371003],
       [-0.33734535],
       [-1.33060375]])

sklearn/utils/estimator_checks.py:2593: AssertionError
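(Editor's note, not part of the log.) The Ridge failure above can be reproduced in a few lines: Ridge silently preserves a 2-D `y`, so a column-vector target of shape `(n_samples, 1)` yields predictions of shape `(n_samples, 1)` instead of the `(n_samples,)` that the new common check expects. A minimal sketch, using a small synthetic dataset rather than the check's `_regression_dataset` helper:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = X @ np.array([1.0, 2.0, 3.0])  # single target, shape (20,)

# Fitting with a 1-D y gives 1-D predictions...
pred_1d = Ridge().fit(X, y).predict(X)

# ...but fitting with the same target as a (20, 1) column vector
# gives 2-D predictions, which is the inconsistency being tested.
pred_2d = Ridge().fit(X, y[:, np.newaxis]).predict(X)

print(pred_1d.shape, pred_2d.shape)  # (20,) (20, 1)
```

This is exactly the discrepancy the second half of `check_regressors_predict_single_target` asserts against: after reshaping `y` with `y[:, np.newaxis]`, the check still expects predictions of shape `(y.shape[0],)`.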
_____________________________ test_estimators[RidgeCV(alphas=array([0.1,1.,10.]))-check_regressors_predict_single_target] _____________________________

estimator = RidgeCV(alphas=array([ 0.1,  1. , 10. ]))
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RidgeCV')
request = <FixtureRequest for <Function test_estimators[RidgeCV(alphas=array([0.1,1.,10.]))-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RidgeCV')
estimator  = RidgeCV(alphas=array([ 0.1,  1. , 10. ]))
request    = <FixtureRequest for <Function test_estimators[RidgeCV(alphas=array([0.1,1.,10.]))-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RidgeCV', regressor_orig = RidgeCV(alphas=array([ 0.1,  1. , 10. ]))

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f"shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of RidgeCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RidgeCV'
regressor  = RidgeCV(alphas=array([ 0.1,  1. , 10. ]))
regressor_orig = RidgeCV(alphas=array([ 0.1,  1. , 10. ]))
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
________________________________ test_estimators[TransformedTargetRegressor()-check_regressors_predict_single_target] _________________________________

estimator = TransformedTargetRegressor()
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'TransformedTargetRegressor')
request = <FixtureRequest for <Function test_estimators[TransformedTargetRegressor()-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_tested_estimators()))
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, FutureWarning)
        ):
            _set_checking_parameters(estimator)
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'TransformedTargetRegressor')
estimator  = TransformedTargetRegressor()
request    = <FixtureRequest for <Function test_estimators[TransformedTargetRegressor()-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:107: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'TransformedTargetRegressor', regressor_orig = TransformedTargetRegressor()

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of TransformedTargetRegressor is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'TransformedTargetRegressor'
regressor  = TransformedTargetRegressor()
regressor_orig = TransformedTargetRegressor()
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.64053854],
       [ 0.37863629],
       [ 0.46910957],
       [-0.1596826 ],
       [-0.87490606],
       [...63266 ],
       [-0.68830191],
       [-1.1573672 ],
       [ 0.35370862],
       [-0.33737672],
       [-1.33065924]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_____________ test_search_cv[GridSearchCV(cv=2,estimator=Ridge(),param_grid={'alpha':[0.1,1.0]})-check_regressors_predict_single_target] ______________

estimator = GridSearchCV(cv=2, estimator=Ridge(), param_grid={'alpha': [0.1, 1.0]})
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GridSearchCV')
request = <FixtureRequest for <Function test_search_cv[GridSearchCV(cv=2,estimator=Ridge(),param_grid={'alpha':[0.1,1.0]})-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GridSearchCV')
estimator  = GridSearchCV(cv=2, estimator=Ridge(), param_grid={'alpha': [0.1, 1.0]})
request    = <FixtureRequest for <Function test_search_cv[GridSearchCV(cv=2,estimator=Ridge(),param_grid={'alpha':[0.1,1.0]})-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'GridSearchCV', regressor_orig = GridSearchCV(cv=2, estimator=Ridge(), param_grid={'alpha': [0.1, 1.0]})

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of GridSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'GridSearchCV'
regressor  = GridSearchCV(cv=2, estimator=Ridge(), param_grid={'alpha': [0.1, 1.0]})
regressor_orig = GridSearchCV(cv=2, estimator=Ridge(), param_grid={'alpha': [0.1, 1.0]})
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target0] _

estimator = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
request = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target0]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
estimator  = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
request    = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target0]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'HalvingGridSearchCV'
regressor_orig = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of HalvingGridSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'HalvingGridSearchCV'
regressor  = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
regressor_orig = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[RandomizedSearchCV(cv=2,estimator=Ridge(),param_distributions={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target] _

estimator = RandomizedSearchCV(cv=2, estimator=Ridge(),
                   param_distributions={'alpha': [0.1, 1.0]}, random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RandomizedSearchCV')
request = <FixtureRequest for <Function test_search_cv[RandomizedSearchCV(cv=2,estimator=Ridge(),param_distributions={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RandomizedSearchCV')
estimator  = RandomizedSearchCV(cv=2, estimator=Ridge(),
                   param_distributions={'alpha': [0.1, 1.0]}, random_state=0)
request    = <FixtureRequest for <Function test_search_cv[RandomizedSearchCV(cv=2,estimator=Ridge(),param_distributions={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RandomizedSearchCV'
regressor_orig = RandomizedSearchCV(cv=2, estimator=Ridge(),
                   param_distributions={'alpha': [0.1, 1.0]}, random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of RandomizedSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RandomizedSearchCV'
regressor  = RandomizedSearchCV(cv=2, estimator=Ridge(),
                   param_distributions={'alpha': [0.1, 1.0]}, random_state=0)
regressor_orig = RandomizedSearchCV(cv=2, estimator=Ridge(),
                   param_distributions={'alpha': [0.1, 1.0]}, random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target1] _

estimator = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
request = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target1]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
estimator  = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
request    = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,estimator=Ridge(),min_resources='smallest',param_grid={'alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target1]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'HalvingGridSearchCV'
regressor_orig = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of HalvingGridSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'HalvingGridSearchCV'
regressor  = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
regressor_orig = HalvingGridSearchCV(cv=2, estimator=Ridge(), min_resources='smallest',
                    param_grid={'alpha': [0.1, 1.0]}, random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[GridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),param_grid={'ridge__alpha':[0.1,1.0]})-check_regressors_predict_single_target] _

estimator = GridSearchCV(cv=2, error_score='raise',
             estimator=Pipeline(steps=[('pca', PCA()), ('ridge', Ridge())]),
             param_grid={'ridge__alpha': [0.1, 1.0]})
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GridSearchCV')
request = <FixtureRequest for <Function test_search_cv[GridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),param_grid={'ridge__alpha':[0.1,1.0]})-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'GridSearchCV')
estimator  = GridSearchCV(cv=2, error_score='raise',
             estimator=Pipeline(steps=[('pca', PCA()), ('ridge', Ridge())]),
             param_grid={'ridge__alpha': [0.1, 1.0]})
request    = <FixtureRequest for <Function test_search_cv[GridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),param_grid={'ridge__alpha':[0.1,1.0]})-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'GridSearchCV'
regressor_orig = GridSearchCV(cv=2, error_score='raise',
             estimator=Pipeline(steps=[('pca', PCA()), ('ridge', Ridge())]),
             param_grid={'ridge__alpha': [0.1, 1.0]})

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of GridSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'GridSearchCV'
regressor  = GridSearchCV(cv=2, error_score='raise',
             estimator=Pipeline(steps=[('pca', PCA()), ('ridge', Ridge())]),
             param_grid={'ridge__alpha': [0.1, 1.0]})
regressor_orig = GridSearchCV(cv=2, error_score='raise',
             estimator=Pipeline(steps=[('pca', PCA()), ('ridge', Ridge())]),
             param_grid={'ridge__alpha': [0.1, 1.0]})
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[HalvingGridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),min_resources='smallest',param_grid={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target] _

estimator = HalvingGridSearchCV(cv=2, error_score='raise',
                    estimator=Pipeline(steps=[('pca', PCA()),
         ...                 min_resources='smallest',
                    param_grid={'ridge__alpha': [0.1, 1.0]}, random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
request = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('...in_resources='smallest',param_grid={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingGridSearchCV')
estimator  = HalvingGridSearchCV(cv=2, error_score='raise',
                    estimator=Pipeline(steps=[('pca', PCA()),
         ...                 min_resources='smallest',
                    param_grid={'ridge__alpha': [0.1, 1.0]}, random_state=0)
request    = <FixtureRequest for <Function test_search_cv[HalvingGridSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('...in_resources='smallest',param_grid={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'HalvingGridSearchCV'
regressor_orig = HalvingGridSearchCV(cv=2, error_score='raise',
                    estimator=Pipeline(steps=[('pca', PCA()),
         ...                 min_resources='smallest',
                    param_grid={'ridge__alpha': [0.1, 1.0]}, random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of HalvingGridSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'HalvingGridSearchCV'
regressor  = HalvingGridSearchCV(cv=2, error_score='raise',
                    estimator=Pipeline(steps=[('pca', PCA()),
         ...                 min_resources='smallest',
                    param_grid={'ridge__alpha': [0.1, 1.0]}, random_state=0)
regressor_orig = HalvingGridSearchCV(cv=2, error_score='raise',
                    estimator=Pipeline(steps=[('pca', PCA()),
         ...                 min_resources='smallest',
                    param_grid={'ridge__alpha': [0.1, 1.0]}, random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[RandomizedSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target] _

estimator = RandomizedSearchCV(cv=2, error_score='raise',
                   estimator=Pipeline(steps=[('pca', PCA()),
           ...e', Ridge())]),
                   param_distributions={'ridge__alpha': [0.1, 1.0]},
                   random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RandomizedSearchCV')
request = <FixtureRequest for <Function test_search_cv[RandomizedSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('p...ge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'RandomizedSearchCV')
estimator  = RandomizedSearchCV(cv=2, error_score='raise',
                   estimator=Pipeline(steps=[('pca', PCA()),
           ...e', Ridge())]),
                   param_distributions={'ridge__alpha': [0.1, 1.0]},
                   random_state=0)
request    = <FixtureRequest for <Function test_search_cv[RandomizedSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('p...ge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'RandomizedSearchCV'
regressor_orig = RandomizedSearchCV(cv=2, error_score='raise',
                   estimator=Pipeline(steps=[('pca', PCA()),
           ...e', Ridge())]),
                   param_distributions={'ridge__alpha': [0.1, 1.0]},
                   random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of RandomizedSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'RandomizedSearchCV'
regressor  = RandomizedSearchCV(cv=2, error_score='raise',
                   estimator=Pipeline(steps=[('pca', PCA()),
           ...e', Ridge())]),
                   param_distributions={'ridge__alpha': [0.1, 1.0]},
                   random_state=0)
regressor_orig = RandomizedSearchCV(cv=2, error_score='raise',
                   estimator=Pipeline(steps=[('pca', PCA()),
           ...e', Ridge())]),
                   param_distributions={'ridge__alpha': [0.1, 1.0]},
                   random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError
_ test_search_cv[HalvingRandomSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[('pca',PCA()),('ridge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target] _

estimator = HalvingRandomSearchCV(cv=2, error_score='raise',
                      estimator=Pipeline(steps=[('pca', PCA()),
     ...dge())]),
                      param_distributions={'ridge__alpha': [0.1, 1.0]},
                      random_state=0)
check = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingRandomSearchCV')
request = <FixtureRequest for <Function test_search_cv[HalvingRandomSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[...ge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

    @parametrize_with_checks(list(_generate_search_cv_instances()))
    def test_search_cv(estimator, check, request):
        # Common tests for SearchCV instances
        # We have a separate test because those meta-estimators can accept a
        # wide range of base estimators (classifiers, regressors, pipelines)
        with ignore_warnings(
            category=(
                FutureWarning,
                ConvergenceWarning,
                UserWarning,
                FutureWarning,
                FitFailedWarning,
            )
        ):
>           check(estimator)

check      = functools.partial(<function check_regressors_predict_single_target at 0x11dc2d1f0>, 'HalvingRandomSearchCV')
estimator  = HalvingRandomSearchCV(cv=2, error_score='raise',
                      estimator=Pipeline(steps=[('pca', PCA()),
     ...dge())]),
                      param_distributions={'ridge__alpha': [0.1, 1.0]},
                      random_state=0)
request    = <FixtureRequest for <Function test_search_cv[HalvingRandomSearchCV(cv=2,error_score='raise',estimator=Pipeline(steps=[...ge',Ridge())]),param_distributions={'ridge__alpha':[0.1,1.0]},random_state=0)-check_regressors_predict_single_target]>>

sklearn/tests/test_common.py:301: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

name = 'HalvingRandomSearchCV'
regressor_orig = HalvingRandomSearchCV(cv=2, error_score='raise',
                      estimator=Pipeline(steps=[('pca', PCA()),
     ...dge())]),
                      param_distributions={'ridge__alpha': [0.1, 1.0]},
                      random_state=0)

    def check_regressors_predict_single_target(name, regressor_orig):
        # check the consistency of the prediction obtained from regressors with
        # single target.
        X, y = _regression_dataset()
        X = _pairwise_estimator_convert_X(X, regressor_orig)
        y = scale(y)  # X is already scaled
        regressor = clone(regressor_orig)
        y = _enforce_estimator_tags_y(regressor, y)
        assert y.ndim == 1
    
        if not hasattr(regressor, "alphas") and hasattr(regressor, "alpha"):
            # linear regressors need to set alpha, but not generalized CV ones
            regressor.alpha = 0.01
        if name == "PassiveAggressiveRegressor":
            regressor.C = 0.01
    
        set_random_state(regressor)
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
        assert y_pred.shape == y.shape, (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {y.shape} but predictions have a "
            f" shape of {y_pred.shape}."
        )
    
        y = y[:, np.newaxis]
        regressor.fit(X, y)
        y_pred = regressor.predict(X)
>       assert y_pred.shape == (y.shape[0],), (
            f"The shape of the prediction of {name} is not consistent with "
            f"y.shape={y.shape}. We expect a shape of {(y.shape[0],)} but predictions have "
            f"a shape of {y_pred.shape}."
        )
E       AssertionError: The shape of the prediction of HalvingRandomSearchCV is not consistent with y.shape=(200, 1). We expect a shape of (200,) but predictions have a shape of (200, 1).

X          = array([[ 0.29213817,  1.31098939, -1.34589423, ..., -0.13099541,
         1.02617223,  0.52005686],
       [-1.4558723...193, -0.91441039],
       [ 2.59455523,  1.14231106, -0.15404749, ..., -0.05455829,
         0.41403923,  0.39284481]])
name       = 'HalvingRandomSearchCV'
regressor  = HalvingRandomSearchCV(cv=2, error_score='raise',
                      estimator=Pipeline(steps=[('pca', PCA()),
     ...dge())]),
                      param_distributions={'ridge__alpha': [0.1, 1.0]},
                      random_state=0)
regressor_orig = HalvingRandomSearchCV(cv=2, error_score='raise',
                      estimator=Pipeline(steps=[('pca', PCA()),
     ...dge())]),
                      param_distributions={'ridge__alpha': [0.1, 1.0]},
                      random_state=0)
y          = array([[ 1.44842059],
       [ 0.02462946],
       [ 0.17266373],
       [-0.4567032 ],
       [-1.1064117 ],
       [...389789],
       [-0.96880098],
       [-1.20408431],
       [ 0.25088783],
       [ 0.4746183 ],
       [-1.11748681]])
y_pred     = array([[ 0.63695684],
       [ 0.37674617],
       [ 0.46745671],
       [-0.15853397],
       [-0.8695535 ],
       [...492207],
       [-0.68414443],
       [-1.15135559],
       [ 0.35383855],
       [-0.33426329],
       [-1.32513407]])

sklearn/utils/estimator_checks.py:2593: AssertionError

@glemaitre glemaitre marked this pull request as draft July 25, 2021 21:13
@NicolasHug
Member

NicolasHug commented Jul 26, 2021

One important thing to note is that if we start adopting the output_shape == input_shape convention, then we can't use the y_pred.ndim == 1 iff is_single_target assumption anymore. We currently rely on that assumption quite a bit I think, sometimes implicitly as is the case for the original failure in #19352.
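To make the ambiguity concrete, here is a minimal numpy-only sketch (array contents are placeholders) of why `ndim` can no longer identify single-target predictions under that convention:

```python
import numpy as np

# Under the output_shape == input_shape convention, 2d predictions no
# longer imply multiple targets: a single target fit from a column
# vector y also yields a 2d prediction, indistinguishable by ndim alone.
single_target_pred = np.zeros((200, 1))  # regressor fit with y of shape (200, 1)
multi_target_pred = np.zeros((200, 3))   # genuine multi-output regressor
assert single_target_pred.ndim == multi_target_pred.ndim == 2
# Only the length of the second axis tells the two cases apart:
assert single_target_pred.shape[1] == 1 and multi_target_pred.shape[1] == 3
```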

These issues will likely be hard to prevent.

Forcing is_single_target => output.ndim == 1 is a bit more restrictive but will probably prevent future bugs and headaches

@glemaitre
Member Author

Forcing is_single_target => output.ndim == 1 is a bit more restrictive but will probably prevent future bugs and headaches

Indeed, I feel that this is something natural. I am only wondering if there is any "historical" reason for not making this assumption :)

@NicolasHug I also opened #20604 that is in the same spirit.

Member

@ogrisel ogrisel left a comment


The test failures are no longer available on Azure Pipelines ("Build not found."). Could you please retrigger the build and copy the test failure log into a collapsed "details" section at the top of this PR, so that we get a shared understanding of the current state of the scikit-learn regressor behaviors?

Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
@ogrisel
Member

ogrisel commented Aug 6, 2021

I have the feeling that, for regressors that support multiple outputs, it would be more natural for predict to return an output with the same shape as the y passed at training time (instead of special-casing n_targets == 1 and automatically squeezing in that case).

At least we would give users full control over the shape they want for those. But I would love to see the complete list of estimators that are impacted by this and what their current behavior is.
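For illustration, a minimal sketch of the divergence this PR's common test surfaces (Ridge mirrors the training-time shape of y, while tree-based regressors squeeze single-target predictions to 1d; behavior as observed at the time of this PR and subject to change):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X, y = rng.randn(20, 3), rng.randn(20)

# With a 1d target, both estimators predict with shape (20,).
assert Ridge().fit(X, y).predict(X).shape == (20,)
assert DecisionTreeRegressor().fit(X, y).predict(X).shape == (20,)

# With the same target as a column vector, behaviors diverge.
y_2d = y[:, np.newaxis]
assert Ridge().fit(X, y_2d).predict(X).shape == (20, 1)  # training shape preserved
assert DecisionTreeRegressor().fit(X, y_2d).predict(X).shape == (20,)  # squeezed
```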

@glemaitre
Member Author

I added the details in the original description.

@amueller
Member

Sounds to me like the choice is between:

  1. input shape = output shape for all estimators
  2. input shape = output shape for multi-output estimators, 1d for single target estimators
  3. always output squeezed, i.e. 1d for single target
  4. output 1d in single target estimators and 2d in multi-output estimators

One thing that people love about sklearn is that models are interchangeable, so I think 2 and 4 are not the best options; they also require users to know implementation details.

Between 1 and 3, I'd prefer not to depend on the input shape, as that makes writing library code much easier; otherwise everybody consuming sklearn would need to handle both outcomes.
Option 3 does require special-casing when doing multi-output and the number of outputs can be 1, but I don't think that's so bad.
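The burden option 1 would place on downstream code can be sketched as the kind of defensive helper every consuming library would end up writing (the helper name is illustrative, not part of sklearn):

```python
import numpy as np

def to_1d_predictions(y_pred):
    """Normalize single-target predictions to 1d.

    Needed by downstream code whenever predict may return either
    (n_samples,) or (n_samples, 1) for a single target, depending on
    the shape of y used at fit time.
    """
    y_pred = np.asarray(y_pred)
    if y_pred.ndim == 2 and y_pred.shape[1] == 1:
        return y_pred.ravel()
    return y_pred
```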

@ogrisel
Member

ogrisel commented Nov 29, 2021

  1. output is 1d in single target estimators and 1d or 2d for multiple target regressors depending on the memorized shape of y_train, to give full control to the user.

@jjerphan jjerphan self-requested a review November 29, 2021 22:14
@adrinjalali
Member

I understand giving full control to the user is a nice thing, but I'm not sure if doing that via checking the shape of the input is the way to go. As a user, I'd expect to either pass a constructor parameter or set a state in the estimator (maybe by calling set_output_shape), rather than depending on the input shape.

I'm in favor of Andy's suggestion in terms of where we want to go (option 3).

In the meantime, I think we can set a global config variable which sets the output shape for the ones which need to go through deprecation, and once deprecation period is over, that config is not needed, or we keep it as an option for users.

@adrinjalali
Member

bump on this one, would be nice to move it forward.

@glemaitre glemaitre mentioned this pull request Mar 12, 2024