One of the nightly builds has failed with:
```
_____________________________ test_sag_regressor ______________________________

    @pytest.mark.filterwarnings('ignore:The max_iter was reached')
    def test_sag_regressor():
        """tests if the sag regressor performs well"""
        xmin, xmax = -5, 5
        n_samples = 20
        tol = .001
        max_iter = 20
        alpha = 0.1
        rng = np.random.RandomState(0)
        X = np.linspace(xmin, xmax, n_samples).reshape(n_samples, 1)
        # simple linear function without noise
        y = 0.5 * X.ravel()
        clf1 = Ridge(tol=tol, solver='sag', max_iter=max_iter,
                     alpha=alpha * n_samples)
        clf2 = clone(clf1)
        clf1.fit(X, y)
        clf2.fit(sp.csr_matrix(X), y)
        score1 = clf1.score(X, y)
        score2 = clf2.score(X, y)
        assert_greater(score1, 0.99)
>       assert_greater(score2, 0.99)

c:\python37\lib\site-packages\sklearn\linear_model\tests\test_sag.py:477:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
c:\python37\lib\unittest\case.py:1238: in assertGreater
    self.fail(self._formatMessage(msg, standardMsg))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <sklearn.utils._unittest_backport.TestCase testMethod=__init__>
msg = '0.9774630692168063 not greater than 0.99'

    def fail(self, msg=None):
        """Fail immediately, with the given message."""
>       raise self.failureException(msg)
E       AssertionError: 0.9774630692168063 not greater than 0.99
```
It might not be related to 32-bit Windows at all. It could just be bad luck: we did not seed the `random_state` on `Ridge`, `sag` is a stochastic solver, `max_iter` is smallish (20 iterations), and we ignore convergence warnings in that test.

If the runtime is not too long, I would rather increase `max_iter`, seed the `Ridge` model with a fixed integer seed that is reused for both fits, and remove the `@pytest.mark.filterwarnings('ignore:The max_iter was reached')` marker.
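A sketch of what the revised test could look like (not the final patch; the exact `max_iter` value and seed are assumptions): `random_state=0` is passed to `Ridge` so `clone` carries the same seed into both fits, and `max_iter` is raised so SAG should actually converge within `tol`, making the warning filter unnecessary.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.base import clone
from sklearn.linear_model import Ridge


def test_sag_regressor():
    """Check that the SAG solver fits a noiseless linear target well."""
    xmin, xmax = -5, 5
    n_samples = 20
    tol = 1e-3
    max_iter = 1000  # was 20; large enough for SAG to converge within tol
    alpha = 0.1
    X = np.linspace(xmin, xmax, n_samples).reshape(n_samples, 1)
    # simple linear function without noise
    y = 0.5 * X.ravel()
    # random_state is part of the estimator params, so clone() reuses the
    # same fixed seed for the sparse fit below
    clf1 = Ridge(tol=tol, solver='sag', max_iter=max_iter,
                 alpha=alpha * n_samples, random_state=0)
    clf2 = clone(clf1)
    clf1.fit(X, y)
    clf2.fit(sp.csr_matrix(X), y)
    score1 = clf1.score(X, y)
    score2 = clf2.score(X, y)
    assert score1 > 0.99
    assert score2 > 0.99
    return score1, score2


score1, score2 = test_sag_regressor()
```

On this noiseless problem the closed-form ridge solution scores well above 0.99, so once the solver converges the assertions should hold deterministically for both the dense and sparse fits.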