Fix upstream CI tests #6896

Merged
jsignell merged 13 commits into dask:master from jsignell:upstream
Feb 4, 2021

Conversation

@jsignell
Member

@jsignell jsignell commented Nov 25, 2020

  • Tests added / passed
  • Passes black dask / flake8 dask

Closes #6148, closes #7051

@jsignell
Member Author

jsignell commented Nov 25, 2020

There are 3 failures when running tests with just the packages from environment-3.8-dev.yaml:

Details

============================================== ERRORS ==============================================
____________________________ ERROR at teardown of test_parquet[pyarrow] ____________________________
[gw1] linux -- Python 3.8.6 /home/julia/conda/envs/dask-upstream/bin/python

self = <s3fs.core.S3FileSystem object at 0x7f66e03df340>, path = 'test'

    def rmdir(self, path):
        path = self._strip_protocol(path).rstrip('/')
        if not self._parent(path):
            try:
>               self.s3.delete_bucket(Bucket=path)

../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:440: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x7f66c9051280>, args = (), kwargs = {'Bucket': 'test'}

    def _api_call(self, *args, **kwargs):
        # We're accepting *args so that we can give a more helpful
        # error message than TypeError: _api_call takes exactly
        # 1 argument.
        if args:
            raise TypeError(
                "%s() only accepts keyword arguments." % py_operation_name)
        # The "self" in this scope is referring to the BaseClient.
>       return self._make_api_call(operation_name, kwargs)

../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:357: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x7f66c9051280>, operation_name = 'DeleteBucket'
api_params = {'Bucket': 'test'}

    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record('API_CALL', {
            'service': service_name,
            'operation': operation_name,
            'params': api_params,
        })
        if operation_model.deprecated:
            logger.debug('Warning: %s.%s() is deprecated',
                         service_name, operation_name)
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.auth_type,
        }
        request_dict = self._convert_to_request_dict(
            api_params, operation_model, context=request_context)
    
        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            'before-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            model=operation_model, params=request_dict,
            request_signer=self._request_signer, context=request_context)
    
        if event_response is not None:
            http, parsed_response = event_response
        else:
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context)
    
        self.meta.events.emit(
            'after-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            http_response=http, parsed=parsed_response,
            model=operation_model, context=request_context
        )
    
        if http.status_code >= 300:
            error_code = parsed_response.get("Error", {}).get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.exceptions.ClientError: An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty

../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:676: ClientError

During handling of the above exception, another exception occurred:

s3_base = None

    @pytest.fixture
    def s3(s3_base):
        with s3_context() as fs:
>           yield fs

dask/bytes/tests/test_s3.py:119: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../conda/envs/dask-upstream/lib/python3.8/contextlib.py:120: in __exit__
    next(self.gen)
dask/bytes/tests/test_s3.py:136: in s3_context
    fs.rm(bucket, recursive=True)
../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:997: in rm
    self.rmdir(bucket)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <s3fs.core.S3FileSystem object at 0x7f66e03df340>, path = 'test'

    def rmdir(self, path):
        path = self._strip_protocol(path).rstrip('/')
        if not self._parent(path):
            try:
                self.s3.delete_bucket(Bucket=path)
            except ClientError as e:
>               raise translate_boto_error(e)
E               OSError: [Errno 39] The bucket you tried to delete is not empty

../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:442: OSError
____________________ ERROR at setup of test_read_bytes_blocksize_on_large_data _____________________
[gw7] linux -- Python 3.8.6 /home/julia/conda/envs/dask-upstream/bin/python

s3 = <s3fs.core.S3FileSystem object at 0x7fa94da38fa0>

    @pytest.fixture()
    @pytest.mark.slow
    def s3_with_yellow_tripdata(s3):
        """
        Fixture with sample yellowtrip CSVs loaded into S3.
    
        Provides the following CSVs:
    
        * s3://test/nyc-taxi/2015/yellow_tripdata_2015-01.csv
        * s3://test/nyc-taxi/2014/yellow_tripdata_2015-mm.csv
          for mm from 01 - 12.
        """
        np = pytest.importorskip("numpy")
        pd = pytest.importorskip("pandas")
    
        data = {
            "VendorID": {0: 2, 1: 1, 2: 1, 3: 1, 4: 1},
            "tpep_pickup_datetime": {
                0: "2015-01-15 19:05:39",
                1: "2015-01-10 20:33:38",
                2: "2015-01-10 20:33:38",
                3: "2015-01-10 20:33:39",
                4: "2015-01-10 20:33:39",
            },
            "tpep_dropoff_datetime": {
                0: "2015-01-15 19:23:42",
                1: "2015-01-10 20:53:28",
                2: "2015-01-10 20:43:41",
                3: "2015-01-10 20:35:31",
                4: "2015-01-10 20:52:58",
            },
            "passenger_count": {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
            "trip_distance": {0: 1.59, 1: 3.3, 2: 1.8, 3: 0.5, 4: 3.0},
            "pickup_longitude": {
                0: -73.993896484375,
                1: -74.00164794921875,
                2: -73.96334075927734,
                3: -74.00908660888672,
                4: -73.97117614746094,
            },
            "pickup_latitude": {
                0: 40.7501106262207,
                1: 40.7242431640625,
                2: 40.80278778076172,
                3: 40.71381759643555,
                4: 40.762428283691406,
            },
            "RateCodeID": {0: 1, 1: 1, 2: 1, 3: 1, 4: 1},
            "store_and_fwd_flag": {0: "N", 1: "N", 2: "N", 3: "N", 4: "N"},
            "dropoff_longitude": {
                0: -73.97478485107422,
                1: -73.99441528320312,
                2: -73.95182037353516,
                3: -74.00432586669923,
                4: -74.00418090820312,
            },
            "dropoff_latitude": {
                0: 40.75061798095703,
                1: 40.75910949707031,
                2: 40.82441329956055,
                3: 40.71998596191406,
                4: 40.742652893066406,
            },
            "payment_type": {0: 1, 1: 1, 2: 2, 3: 2, 4: 2},
            "fare_amount": {0: 12.0, 1: 14.5, 2: 9.5, 3: 3.5, 4: 15.0},
            "extra": {0: 1.0, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5},
            "mta_tax": {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5},
            "tip_amount": {0: 3.25, 1: 2.0, 2: 0.0, 3: 0.0, 4: 0.0},
            "tolls_amount": {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0},
            "improvement_surcharge": {0: 0.3, 1: 0.3, 2: 0.3, 3: 0.3, 4: 0.3},
            "total_amount": {0: 17.05, 1: 17.8, 2: 10.8, 3: 4.8, 4: 16.3},
        }
        sample = pd.DataFrame(data)
        df = sample.take(np.arange(5).repeat(10000))
        file = io.BytesIO()
        sfile = io.TextIOWrapper(file)
        df.to_csv(sfile, index=False)
    
        key = "nyc-taxi/2015/yellow_tripdata_2015-01.csv"
        client = boto3.client("s3", endpoint_url="http://127.0.0.1:5555/")
        client.put_object(Bucket=test_bucket_name, Key=key, Body=file)
        key = "nyc-taxi/2014/yellow_tripdata_2014-{:0>2d}.csv"
    
        for i in range(1, 13):
            file.seek(0)
>           client.put_object(Bucket=test_bucket_name, Key=key.format(i), Body=file)

dask/bytes/tests/test_s3.py:224: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:357: in _api_call
    return self._make_api_call(operation_name, kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x7fa94d601160>, operation_name = 'PutObject'
api_params = {'Body': <_io.BytesIO object at 0x7fa94dae93b0>, 'Bucket': 'test', 'Key': 'nyc-taxi/2014/yellow_tripdata_2014-04.csv'}

    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record('API_CALL', {
            'service': service_name,
            'operation': operation_name,
            'params': api_params,
        })
        if operation_model.deprecated:
            logger.debug('Warning: %s.%s() is deprecated',
                         service_name, operation_name)
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.auth_type,
        }
        request_dict = self._convert_to_request_dict(
            api_params, operation_model, context=request_context)
    
        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            'before-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            model=operation_model, params=request_dict,
            request_signer=self._request_signer, context=request_context)
    
        if event_response is not None:
            http, parsed_response = event_response
        else:
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context)
    
        self.meta.events.emit(
            'after-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            http_response=http, parsed=parsed_response,
            model=operation_model, context=request_context
        )
    
        if http.status_code >= 300:
            error_code = parsed_response.get("Error", {}).get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the PutObject operation: The specified bucket does not exist

../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:676: NoSuchBucket
___________________ ERROR at teardown of test_read_bytes_blocksize_on_large_data ___________________
[gw7] linux -- Python 3.8.6 /home/julia/conda/envs/dask-upstream/bin/python

self = <s3fs.core.S3FileSystem object at 0x7fa94da38fa0>, path = 'test'

    def rmdir(self, path):
        path = self._strip_protocol(path).rstrip('/')
        if not self._parent(path):
            try:
>               self.s3.delete_bucket(Bucket=path)

../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:440: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x7fa94d6431c0>, args = (), kwargs = {'Bucket': 'test'}

    def _api_call(self, *args, **kwargs):
        # We're accepting *args so that we can give a more helpful
        # error message than TypeError: _api_call takes exactly
        # 1 argument.
        if args:
            raise TypeError(
                "%s() only accepts keyword arguments." % py_operation_name)
        # The "self" in this scope is referring to the BaseClient.
>       return self._make_api_call(operation_name, kwargs)

../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:357: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <botocore.client.S3 object at 0x7fa94d6431c0>, operation_name = 'DeleteBucket'
api_params = {'Bucket': 'test'}

    def _make_api_call(self, operation_name, api_params):
        operation_model = self._service_model.operation_model(operation_name)
        service_name = self._service_model.service_name
        history_recorder.record('API_CALL', {
            'service': service_name,
            'operation': operation_name,
            'params': api_params,
        })
        if operation_model.deprecated:
            logger.debug('Warning: %s.%s() is deprecated',
                         service_name, operation_name)
        request_context = {
            'client_region': self.meta.region_name,
            'client_config': self.meta.config,
            'has_streaming_input': operation_model.has_streaming_input,
            'auth_type': operation_model.auth_type,
        }
        request_dict = self._convert_to_request_dict(
            api_params, operation_model, context=request_context)
    
        service_id = self._service_model.service_id.hyphenize()
        handler, event_response = self.meta.events.emit_until_response(
            'before-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            model=operation_model, params=request_dict,
            request_signer=self._request_signer, context=request_context)
    
        if event_response is not None:
            http, parsed_response = event_response
        else:
            http, parsed_response = self._make_request(
                operation_model, request_dict, request_context)
    
        self.meta.events.emit(
            'after-call.{service_id}.{operation_name}'.format(
                service_id=service_id,
                operation_name=operation_name),
            http_response=http, parsed=parsed_response,
            model=operation_model, context=request_context
        )
    
        if http.status_code >= 300:
            error_code = parsed_response.get("Error", {}).get("Code")
            error_class = self.exceptions.from_code(error_code)
>           raise error_class(parsed_response, operation_name)
E           botocore.errorfactory.NoSuchBucket: An error occurred (NoSuchBucket) when calling the DeleteBucket operation: The specified bucket does not exist

../conda/envs/dask-upstream/lib/python3.8/site-packages/botocore/client.py:676: NoSuchBucket

During handling of the above exception, another exception occurred:

s3_base = None

    @pytest.fixture
    def s3(s3_base):
        with s3_context() as fs:
>           yield fs

dask/bytes/tests/test_s3.py:119: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../conda/envs/dask-upstream/lib/python3.8/contextlib.py:120: in __exit__
    next(self.gen)
dask/bytes/tests/test_s3.py:136: in s3_context
    fs.rm(bucket, recursive=True)
../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:997: in rm
    self.rmdir(bucket)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <s3fs.core.S3FileSystem object at 0x7fa94da38fa0>, path = 'test'

    def rmdir(self, path):
        path = self._strip_protocol(path).rstrip('/')
        if not self._parent(path):
            try:
                self.s3.delete_bucket(Bucket=path)
            except ClientError as e:
>               raise translate_boto_error(e)
E               FileNotFoundError: The specified bucket does not exist

../conda/envs/dask-upstream/lib/python3.8/site-packages/s3fs/core.py:442: FileNotFoundError
============================================= FAILURES =============================================
_____________________ test_partitioned_preserve_index[fastparquet-fastparquet] _____________________
[gw4] linux -- Python 3.8.6 /home/julia/conda/envs/dask-upstream/bin/python

tmpdir = local('/tmp/pytest-of-julia/pytest-27/popen-gw4/test_partitioned_preserve_inde0')
write_engine = 'fastparquet', read_engine = 'fastparquet'

    @write_read_engines_xfail
    def test_partitioned_preserve_index(tmpdir, write_engine, read_engine):
    
        if write_engine == "pyarrow" and pa.__version__ < LooseVersion("0.15.0"):
            pytest.skip("PyArrow>=0.15 Required.")
    
        tmp = str(tmpdir)
        size = 1_000
        npartitions = 4
        b = np.arange(npartitions).repeat(size // npartitions)
        data = pd.DataFrame(
            {
                "myindex": np.arange(size),
                "A": np.random.random(size=size),
                "B": pd.Categorical(b),
            }
        ).set_index("myindex")
        data.index.name = None
        df1 = dd.from_pandas(data, npartitions=npartitions)
        df1.to_parquet(tmp, partition_on="B", engine=write_engine)
    
        expect = data[data["B"] == 1]
        got = dd.read_parquet(tmp, engine=read_engine, filters=[("B", "==", 1)])
>       assert_eq(expect, got)

dask/dataframe/io/tests/test_parquet.py:2788: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

a =             A  B
250  0.836370  1
251  0.029695  1
252  0.184864  1
253  0.639815  1
254  0.177313  1
..        ... ..
495  0.048179  1
496  0.719764  1
497  0.802599  1
498  0.798046  1
499  0.897758  1

[250 rows x 2 columns]
b = Empty DataFrame
Columns: [A, B]
Index: [], check_names = True, check_dtypes = True
check_divisions = True, check_index = True, kwargs = {}

    def assert_eq(
        a,
        b,
        check_names=True,
        check_dtypes=True,
        check_divisions=True,
        check_index=True,
        **kwargs,
    ):
        if check_divisions:
            assert_divisions(a)
            assert_divisions(b)
            if hasattr(a, "divisions") and hasattr(b, "divisions"):
                at = type(np.asarray(a.divisions).tolist()[0])  # numpy to python
                bt = type(np.asarray(b.divisions).tolist()[0])  # scalar conversion
                assert at == bt, (at, bt)
        assert_sane_keynames(a)
        assert_sane_keynames(b)
        a = _check_dask(a, check_names=check_names, check_dtypes=check_dtypes)
        b = _check_dask(b, check_names=check_names, check_dtypes=check_dtypes)
        if not check_index:
            a = a.reset_index(drop=True)
            b = b.reset_index(drop=True)
        if hasattr(a, "to_pandas"):
            a = a.to_pandas()
        if hasattr(b, "to_pandas"):
            b = b.to_pandas()
        if isinstance(a, pd.DataFrame):
            a = _maybe_sort(a)
            b = _maybe_sort(b)
>           tm.assert_frame_equal(a, b, **kwargs)
E           AssertionError: DataFrame are different
E           
E           DataFrame shape mismatch
E           [left]:  (250, 2)
E           [right]: (0, 2)

dask/dataframe/utils.py:828: AssertionError
_______________________ test_partitioned_preserve_index[fastparquet-pyarrow] _______________________
[gw4] linux -- Python 3.8.6 /home/julia/conda/envs/dask-upstream/bin/python

tmpdir = local('/tmp/pytest-of-julia/pytest-27/popen-gw4/test_partitioned_preserve_inde1')
write_engine = 'fastparquet', read_engine = 'pyarrow'

    @write_read_engines_xfail
    def test_partitioned_preserve_index(tmpdir, write_engine, read_engine):
    
        if write_engine == "pyarrow" and pa.__version__ < LooseVersion("0.15.0"):
            pytest.skip("PyArrow>=0.15 Required.")
    
        tmp = str(tmpdir)
        size = 1_000
        npartitions = 4
        b = np.arange(npartitions).repeat(size // npartitions)
        data = pd.DataFrame(
            {
                "myindex": np.arange(size),
                "A": np.random.random(size=size),
                "B": pd.Categorical(b),
            }
        ).set_index("myindex")
        data.index.name = None
        df1 = dd.from_pandas(data, npartitions=npartitions)
        df1.to_parquet(tmp, partition_on="B", engine=write_engine)
    
        expect = data[data["B"] == 1]
        got = dd.read_parquet(tmp, engine=read_engine, filters=[("B", "==", 1)])
>       assert_eq(expect, got)

dask/dataframe/io/tests/test_parquet.py:2788: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

a =             A  B
250  0.986650  1
251  0.673856  1
252  0.881741  1
253  0.144109  1
254  0.916354  1
..        ... ..
495  0.468871  1
496  0.093040  1
497  0.007673  1
498  0.034938  1
499  0.503552  1

[250 rows x 2 columns]
b = Empty DataFrame
Columns: [A, B]
Index: [], check_names = True, check_dtypes = True
check_divisions = True, check_index = True, kwargs = {}

    def assert_eq(
        a,
        b,
        check_names=True,
        check_dtypes=True,
        check_divisions=True,
        check_index=True,
        **kwargs,
    ):
        if check_divisions:
            assert_divisions(a)
            assert_divisions(b)
            if hasattr(a, "divisions") and hasattr(b, "divisions"):
                at = type(np.asarray(a.divisions).tolist()[0])  # numpy to python
                bt = type(np.asarray(b.divisions).tolist()[0])  # scalar conversion
                assert at == bt, (at, bt)
        assert_sane_keynames(a)
        assert_sane_keynames(b)
        a = _check_dask(a, check_names=check_names, check_dtypes=check_dtypes)
        b = _check_dask(b, check_names=check_names, check_dtypes=check_dtypes)
        if not check_index:
            a = a.reset_index(drop=True)
            b = b.reset_index(drop=True)
        if hasattr(a, "to_pandas"):
            a = a.to_pandas()
        if hasattr(b, "to_pandas"):
            b = b.to_pandas()
        if isinstance(a, pd.DataFrame):
            a = _maybe_sort(a)
            b = _maybe_sort(b)
>           tm.assert_frame_equal(a, b, **kwargs)
E           AssertionError: DataFrame are different
E           
E           DataFrame shape mismatch
E           [left]:  (250, 2)
E           [right]: (0, 2)

dask/dataframe/utils.py:828: AssertionError

@jsignell
Member Author

After first commit -> https://github.com/jsignell/dask/runs/1455951088

@jsignell
Member Author

Strangely, it used to work to set slices of integer-typed np.arrays to np.nan: the values would silently become some very large negative integer. Now that raises an error, so I think we should change those tests to use floats.
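For context, a minimal sketch of the behavior change described above (array names are illustrative, not from the test suite):

```python
import numpy as np

# Assigning NaN into an integer array goes through a float -> int cast
# with undefined results: older NumPy silently wrapped it to a large
# negative sentinel value, while newer releases raise ValueError.
arr = np.arange(5)            # integer dtype
try:
    arr[1:3] = np.nan         # raises on newer NumPy
except ValueError:
    pass

# The fix suggested above: use a float dtype so NaN is representable.
farr = np.arange(5, dtype=float)
farr[1:3] = np.nan
print(np.isnan(farr))         # [False  True  True False False]
```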

@jsignell
Member Author

The last issues should be with sparse, so hopefully I can fix those up and finally get this thing merged!

@jsignell jsignell marked this pull request as ready for review February 4, 2021 15:42
@jsignell
Member Author

jsignell commented Feb 4, 2021

I wrote up the sparse issue in #7169 and I think it should be tracked separately. I am 🤞 thinking that this might actually go green.

@jsignell
Member Author

jsignell commented Feb 4, 2021

I'm planning to merge this when it's green.



Development

Successfully merging this pull request may close these issues:

  • sparse test_basic's mean fails on the upstream
  • Upstream dev failures

2 participants