
Update pre-commit hooks #10208


Merged
merged 2 commits on Apr 8, 2025
6 changes: 3 additions & 3 deletions .pre-commit-config.yaml
@@ -25,7 +25,7 @@ repos:
- id: text-unicode-replacement-char
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
-rev: v0.9.9
+rev: v0.11.4
hooks:
- id: ruff-format
- id: ruff
@@ -69,12 +69,12 @@ repos:
- id: taplo-format
args: ["--option", "array_auto_collapse=false"]
- repo: https://github.com/abravalheri/validate-pyproject
-rev: v0.23
+rev: v0.24.1
hooks:
- id: validate-pyproject
additional_dependencies: ["validate-pyproject-schema-store[all]"]
- repo: https://github.com/crate-ci/typos
-rev: dictgen-v0.3.1
+rev: v1
hooks:
- id: typos
# https://github.com/crate-ci/typos/issues/347
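These rev bumps are the kind of change `pre-commit autoupdate` produces; a minimal sketch of scripting the update and re-running the full hook suite afterwards (this assumes the `pre-commit` CLI is installed, and is not itself part of the PR):

```python
import subprocess

# Bump every hook in .pre-commit-config.yaml to its repo's latest tag,
# then run all hooks once over the whole tree to confirm nothing breaks.
subprocess.run(["pre-commit", "autoupdate"], check=True)
subprocess.run(["pre-commit", "run", "--all-files"], check=True)
```

Because `autoupdate` picks each hook repo's latest tag, it can land on an unusual-looking tag, which is how the `typos` hook moves from `dictgen-v0.3.1` to `v1` here.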
2 changes: 1 addition & 1 deletion design_notes/flexible_indexes_notes.md
@@ -166,7 +166,7 @@ Besides `pandas.Index`, other indexes currently supported in Xarray like `CFTime

Like for the indexes, explicit coordinate creation should be preferred over implicit coordinate creation. However, there may be some situations where we would like to keep creating coordinates implicitly for backwards compatibility.

-For example, it is currently possible to pass a `pandas.MulitIndex` object as a coordinate to the Dataset/DataArray constructor:
+For example, it is currently possible to pass a `pandas.MultiIndex` object as a coordinate to the Dataset/DataArray constructor:

```python
>>> midx = pd.MultiIndex.from_arrays([['a', 'b'], [0, 1]], names=['lvl1', 'lvl2'])
```
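For reference, a minimal sketch of the implicit pattern described above (recent xarray releases may warn about or deprecate it; the variable names are illustrative):

```python
import pandas as pd
import xarray as xr

# Passing a pandas.MultiIndex directly as a coordinate implicitly creates
# the "lvl1" and "lvl2" level coordinates alongside "x".
midx = pd.MultiIndex.from_arrays([["a", "b"], [0, 1]], names=["lvl1", "lvl2"])
da = xr.DataArray([1.0, 2.0], dims="x", coords={"x": midx})
print(da.coords)
```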
2 changes: 1 addition & 1 deletion doc/getting-started-guide/quick-overview.rst
@@ -128,7 +128,7 @@ Operations also align based on index labels:

data[:-1] - data[:1]

-For more, see :ref:`comput`.
+For more, see :ref:`compute`.

GroupBy
-------
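A small sketch of the label-based alignment shown earlier in this hunk (toy data, not from the docs):

```python
import numpy as np
import xarray as xr

data = xr.DataArray(np.arange(4.0), dims="x", coords={"x": [10, 20, 30, 40]})
# data[:-1] carries labels 10, 20, 30 while data[:1] carries only label 10,
# so the subtraction aligns on the intersection: a single element at x=10.
print(data[:-1] - data[:1])
```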
10 changes: 5 additions & 5 deletions doc/user-guide/computation.rst
@@ -1,6 +1,6 @@
.. currentmodule:: xarray

-.. _comput:
+.. _compute:

###########
Computation
@@ -236,7 +236,7 @@ These operations automatically skip missing values, like in pandas:
If desired, you can disable this behavior by invoking the aggregation method
with ``skipna=False``.

-.. _comput.rolling:
+.. _compute.rolling:

Rolling window operations
=========================
@@ -308,7 +308,7 @@ We can also manually iterate through ``Rolling`` objects:
# arr_window is a view of x
...

-.. _comput.rolling_exp:
+.. _compute.rolling_exp:

While ``rolling`` provides a simple moving average, ``DataArray`` also supports
an exponential moving average with :py:meth:`~xarray.DataArray.rolling_exp`.
@@ -354,7 +354,7 @@ You can also use ``construct`` to compute a weighted rolling sum:
To avoid this, use ``skipna=False`` as in the above example.


-.. _comput.weighted:
+.. _compute.weighted:

Weighted array reductions
=========================
@@ -823,7 +823,7 @@ Arithmetic between two datasets matches data variables of the same name:
Similarly to index based alignment, the result has the intersection of all
matching data variables.

-.. _comput.wrapping-custom:
+.. _compute.wrapping-custom:

Wrapping custom computation
===========================
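For context on the sections being relabeled, a minimal sketch of two of the behaviors they document, `skipna` handling and rolling windows (toy data; values are only indicative):

```python
import numpy as np
import xarray as xr

arr = xr.DataArray([1.0, np.nan, 3.0, 4.0], dims="time")

# Aggregations skip missing values by default, like in pandas;
# skipna=False propagates the NaN instead.
print(arr.mean())              # ~2.667
print(arr.mean(skipna=False))  # nan

# A rolling mean over a 2-element window; the first entry is NaN
# because its window is incomplete.
print(arr.rolling(time=2).mean())
```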
2 changes: 1 addition & 1 deletion doc/user-guide/dask.rst
@@ -282,7 +282,7 @@ we use to calculate `Spearman's rank-correlation coefficient <https://en.wikiped
The only aspect of this example that is different from standard usage of
``apply_ufunc()`` is that we needed to supply the ``output_dtypes`` arguments.
-(Read up on :ref:`comput.wrapping-custom` for an explanation of the
+(Read up on :ref:`compute.wrapping-custom` for an explanation of the
"core dimensions" listed in ``input_core_dims``.)

Our new ``spearman_correlation()`` function achieves near linear speedup
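A hedged sketch of the ``output_dtypes`` requirement that paragraph refers to (the function and dimension names are illustrative, and dask must be installed):

```python
import numpy as np
import xarray as xr

def rank_along_time(arr: np.ndarray) -> np.ndarray:
    # Rank values along the last axis using plain NumPy.
    return arr.argsort(axis=-1).argsort(axis=-1)

da = xr.DataArray(np.random.rand(4, 10), dims=("place", "time")).chunk({"place": 2})
ranked = xr.apply_ufunc(
    rank_along_time,
    da,
    input_core_dims=[["time"]],   # the "core dimensions" the function consumes
    output_core_dims=[["time"]],
    dask="parallelized",
    output_dtypes=[np.intp],      # required when dask="parallelized"
)
```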
2 changes: 1 addition & 1 deletion doc/user-guide/data-structures.rst
@@ -880,7 +880,7 @@ them into dataset objects:

The merge method is particularly interesting, because it implements the same
logic used for merging coordinates in arithmetic operations
-(see :ref:`comput`):
+(see :ref:`compute`):

.. ipython:: python

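A small sketch of the merge behavior mentioned in that hunk (toy datasets, not from the docs):

```python
import xarray as xr

ds = xr.Dataset({"a": ("x", [1, 2])}, coords={"x": [0, 1]})
other = xr.Dataset({"b": ("x", [3, 4])}, coords={"x": [0, 1]})
# merge combines the data variables and reconciles the shared "x"
# coordinate with the same logic used when aligning arithmetic operands.
merged = ds.merge(other)
print(list(merged.data_vars))  # ['a', 'b']
```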
2 changes: 1 addition & 1 deletion doc/user-guide/indexing.rst
@@ -276,7 +276,7 @@ This is particularly useful for ragged indexing of multi-dimensional data,
e.g., to apply a 2D mask to an image. Note that ``where`` follows all the
usual xarray broadcasting and alignment rules for binary operations (e.g.,
``+``) between the object being indexed and the condition, as described in
-:ref:`comput`:
+:ref:`compute`:

.. ipython:: python

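A minimal sketch of the `where`-based masking that section describes (illustrative data):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(12).reshape(3, 4), dims=("y", "x"))
# where() broadcasts the condition against the array and fills the
# non-matching positions with NaN, which is what makes ragged 2D masks work.
masked = da.where(da % 2 == 0)
print(masked)
```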
6 changes: 3 additions & 3 deletions doc/whats-new.rst
@@ -4281,7 +4281,7 @@ New Features
~~~~~~~~~~~~

- Weighted array reductions are now supported via the new :py:meth:`DataArray.weighted`
-and :py:meth:`Dataset.weighted` methods. See :ref:`comput.weighted`. (:issue:`422`, :pull:`2922`).
+and :py:meth:`Dataset.weighted` methods. See :ref:`compute.weighted`. (:issue:`422`, :pull:`2922`).
By `Mathias Hauser <https://github.com/mathause>`_.
- The new jupyter notebook repr (``Dataset._repr_html_`` and
``DataArray._repr_html_``) (introduced in 0.14.1) is now on by default. To
@@ -6412,7 +6412,7 @@ Enhancements
- New helper function :py:func:`~xarray.apply_ufunc` for wrapping functions
written to work on NumPy arrays to support labels on xarray objects
(:issue:`770`). ``apply_ufunc`` also supports automatic parallelization for
-many functions with dask. See :ref:`comput.wrapping-custom` and
+many functions with dask. See :ref:`compute.wrapping-custom` and
:ref:`dask.automatic-parallelization` for details.
By `Stephan Hoyer <https://github.com/shoyer>`_.

@@ -7434,7 +7434,7 @@ Enhancements
* x (x) int64 0 1 2
* y (y) int64 0 1 2 3 4

-See :ref:`comput.rolling` for more details. By
+See :ref:`compute.rolling` for more details. By
`Joe Hamman <https://github.com/jhamman>`_.

Bug fixes
2 changes: 1 addition & 1 deletion xarray/backends/zarr.py
@@ -1290,7 +1290,7 @@ def _validate_and_autodetect_region(self, ds: Dataset) -> Dataset:
region = self._write_region

if region == "auto":
-region = {dim: "auto" for dim in ds.dims}
+region = dict.fromkeys(ds.dims, "auto")

if not isinstance(region, dict):
raise TypeError(f"``region`` must be a dict, got {type(region)}")
4 changes: 2 additions & 2 deletions xarray/computation/fit.py
@@ -80,8 +80,8 @@ def _initialize_feasible(lb, ub):
)
return p0

-param_defaults = {p: 1 for p in params}
-bounds_defaults = {p: (-np.inf, np.inf) for p in params}
+param_defaults = dict.fromkeys(params, 1)
+bounds_defaults = dict.fromkeys(params, (-np.inf, np.inf))
for p in params:
if p in func_args and func_args[p].default is not func_args[p].empty:
param_defaults[p] = func_args[p].default
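All of the `dict.fromkeys` swaps in this PR follow the same pattern; a quick sketch of the equivalence, plus the one caveat worth remembering (the shared value should be immutable):

```python
import numpy as np

params = ["a", "b", "c"]

# Equivalent to the comprehensions these hunks replace.
assert dict.fromkeys(params, 1) == {p: 1 for p in params}

# dict.fromkeys stores the *same* object under every key, which is fine
# for immutable values such as ints and tuples...
bounds = dict.fromkeys(params, (-np.inf, np.inf))
assert bounds["a"] is bounds["b"]

# ...but would silently share state if the value were mutable:
shared = dict.fromkeys(params, [])
shared["a"].append(1)
assert shared["b"] == [1]  # every key sees the same list
```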
2 changes: 1 addition & 1 deletion xarray/computation/rolling.py
@@ -1087,7 +1087,7 @@ def __init__(
if utils.is_dict_like(coord_func):
coord_func_map = coord_func
else:
-coord_func_map = {d: coord_func for d in self.obj.dims}
+coord_func_map = dict.fromkeys(self.obj.dims, coord_func)
for c in self.obj.coords:
if c not in coord_func_map:
coord_func_map[c] = duck_array_ops.mean # type: ignore[index]
6 changes: 3 additions & 3 deletions xarray/core/common.py
@@ -457,7 +457,7 @@ def squeeze(
numpy.squeeze
"""
dims = get_squeeze_dims(self, dim, axis)
-return self.isel(drop=drop, **{d: 0 for d in dims})
+return self.isel(drop=drop, **dict.fromkeys(dims, 0))

def clip(
self,
@@ -1701,11 +1701,11 @@ def full_like(

if isinstance(other, Dataset):
if not isinstance(fill_value, dict):
-fill_value = {k: fill_value for k in other.data_vars.keys()}
+fill_value = dict.fromkeys(other.data_vars.keys(), fill_value)

dtype_: Mapping[Any, DTypeLikeSave]
if not isinstance(dtype, Mapping):
-dtype_ = {k: dtype for k in other.data_vars.keys()}
+dtype_ = dict.fromkeys(other.data_vars.keys(), dtype)
else:
dtype_ = dtype

10 changes: 5 additions & 5 deletions xarray/core/coordinates.py
@@ -309,7 +309,7 @@ def __init__(
var = as_variable(data, name=name, auto_convert=False)
if var.dims == (name,) and indexes is None:
index, index_vars = create_default_index_implicit(var, list(coords))
-default_indexes.update({k: index for k in index_vars})
+default_indexes.update(dict.fromkeys(index_vars, index))
variables.update(index_vars)
else:
variables[name] = var
@@ -384,7 +384,7 @@ def from_xindex(cls, index: Index) -> Self:
f"create any coordinate.\n{index!r}"
)

-indexes = {name: index for name in variables}
+indexes = dict.fromkeys(variables, index)

return cls(coords=variables, indexes=indexes)

@@ -412,7 +412,7 @@ def from_pandas_multiindex(cls, midx: pd.MultiIndex, dim: Hashable) -> Self:
xr_idx = PandasMultiIndex(midx, dim)

variables = xr_idx.create_variables()
-indexes = {k: xr_idx for k in variables}
+indexes = dict.fromkeys(variables, xr_idx)

return cls(coords=variables, indexes=indexes)

@@ -1134,7 +1134,7 @@ def create_coords_with_default_indexes(
# pandas multi-index edge cases.
variable = variable.to_index_variable()
idx, idx_vars = create_default_index_implicit(variable, all_variables)
-indexes.update({k: idx for k in idx_vars})
+indexes.update(dict.fromkeys(idx_vars, idx))
variables.update(idx_vars)
all_variables.update(idx_vars)
else:
@@ -1159,7 +1159,7 @@ def _coordinates_from_variable(variable: Variable) -> Coordinates:

(name,) = variable.dims
new_index, index_vars = create_default_index_implicit(variable)
-indexes = {k: new_index for k in index_vars}
+indexes = dict.fromkeys(index_vars, new_index)
new_vars = new_index.create_variables()
new_vars[name].attrs = variable.attrs
return Coordinates(new_vars, indexes)
2 changes: 1 addition & 1 deletion xarray/core/dataarray.py
@@ -7078,7 +7078,7 @@ def weighted(self, weights: DataArray) -> DataArrayWeighted:
--------
:func:`Dataset.weighted <Dataset.weighted>`

-:ref:`comput.weighted`
+:ref:`compute.weighted`
User guide on weighted array reduction using :py:func:`~xarray.DataArray.weighted`

:doc:`xarray-tutorial:fundamentals/03.4_weighted`
26 changes: 13 additions & 13 deletions xarray/core/dataset.py
@@ -1122,7 +1122,7 @@ def _copy_listed(self, names: Iterable[Hashable]) -> Self:
coord_names.add(var_name)
if (var_name,) == var.dims:
index, index_vars = create_default_index_implicit(var, names)
-indexes.update({k: index for k in index_vars})
+indexes.update(dict.fromkeys(index_vars, index))
variables.update(index_vars)
coord_names.update(index_vars)

@@ -3012,7 +3012,7 @@ def head(
if not isinstance(indexers, int) and not is_dict_like(indexers):
raise TypeError("indexers must be either dict-like or a single integer")
if isinstance(indexers, int):
-indexers = {dim: indexers for dim in self.dims}
+indexers = dict.fromkeys(self.dims, indexers)
indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "head")
for k, v in indexers.items():
if not isinstance(v, int):
@@ -3100,7 +3100,7 @@ def tail(
if not isinstance(indexers, int) and not is_dict_like(indexers):
raise TypeError("indexers must be either dict-like or a single integer")
if isinstance(indexers, int):
-indexers = {dim: indexers for dim in self.dims}
+indexers = dict.fromkeys(self.dims, indexers)
indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "tail")
for k, v in indexers.items():
if not isinstance(v, int):
@@ -3186,7 +3186,7 @@ def thin(
):
raise TypeError("indexers must be either dict-like or a single integer")
if isinstance(indexers, int):
-indexers = {dim: indexers for dim in self.dims}
+indexers = dict.fromkeys(self.dims, indexers)
indexers = either_dict_or_kwargs(indexers, indexers_kwargs, "thin")
for k, v in indexers.items():
if not isinstance(v, int):
@@ -4029,7 +4029,7 @@ def _rename_indexes(
for index, coord_names in self.xindexes.group_by_index():
new_index = index.rename(name_dict, dims_dict)
new_coord_names = [name_dict.get(k, k) for k in coord_names]
-indexes.update({k: new_index for k in new_coord_names})
+indexes.update(dict.fromkeys(new_coord_names, new_index))
new_index_vars = new_index.create_variables(
{
new: self._variables[old]
@@ -4315,7 +4315,7 @@ def swap_dims(
variables[current_name] = var
else:
index, index_vars = create_default_index_implicit(var)
-indexes.update({name: index for name in index_vars})
+indexes.update(dict.fromkeys(index_vars, index))
variables.update(index_vars)
coord_names.update(index_vars)
else:
@@ -4474,7 +4474,7 @@ def expand_dims(
elif isinstance(dim, Sequence):
if len(dim) != len(set(dim)):
raise ValueError("dims should not contain duplicate values.")
-dim = {d: 1 for d in dim}
+dim = dict.fromkeys(dim, 1)

dim = either_dict_or_kwargs(dim, dim_kwargs, "expand_dims")
assert isinstance(dim, MutableMapping)
@@ -4700,7 +4700,7 @@ def set_index(
for n in idx.index.names:
replace_dims[n] = dim

-new_indexes.update({k: idx for k in idx_vars})
+new_indexes.update(dict.fromkeys(idx_vars, idx))
new_variables.update(idx_vars)

# re-add deindexed coordinates (convert to base variables)
@@ -4816,7 +4816,7 @@ def drop_or_convert(var_names):
# instead replace it by a new (multi-)index with dropped level(s)
idx = index.keep_levels(keep_level_vars)
idx_vars = idx.create_variables(keep_level_vars)
-new_indexes.update({k: idx for k in idx_vars})
+new_indexes.update(dict.fromkeys(idx_vars, idx))
new_variables.update(idx_vars)
if not isinstance(idx, PandasMultiIndex):
# multi-index reduced to single index
@@ -4996,7 +4996,7 @@ def reorder_levels(
level_vars = {k: self._variables[k] for k in order}
idx = index.reorder_levels(level_vars)
idx_vars = idx.create_variables(level_vars)
-new_indexes.update({k: idx for k in idx_vars})
+new_indexes.update(dict.fromkeys(idx_vars, idx))
new_variables.update(idx_vars)

indexes = {k: v for k, v in self._indexes.items() if k not in new_indexes}
@@ -5104,7 +5104,7 @@ def _stack_once(
if len(product_vars) == len(dims):
idx = index_cls.stack(product_vars, new_dim)
new_indexes[new_dim] = idx
-new_indexes.update({k: idx for k in product_vars})
+new_indexes.update(dict.fromkeys(product_vars, idx))
idx_vars = idx.create_variables(product_vars)
# keep consistent multi-index coordinate order
for k in idx_vars:
@@ -5351,7 +5351,7 @@ def _unstack_full_reindex(
# TODO: we may deprecate implicit re-indexing with a pandas.MultiIndex
xr_full_idx = PandasMultiIndex(full_idx, dim)
indexers = Indexes(
-{k: xr_full_idx for k in index_vars},
+dict.fromkeys(index_vars, xr_full_idx),
xr_full_idx.create_variables(index_vars),
)
obj = self._reindex(
@@ -10052,7 +10052,7 @@ def weighted(self, weights: DataArray) -> DatasetWeighted:
--------
:func:`DataArray.weighted <DataArray.weighted>`

-:ref:`comput.weighted`
+:ref:`compute.weighted`
User guide on weighted array reduction using :py:func:`~xarray.Dataset.weighted`

:doc:`xarray-tutorial:fundamentals/03.4_weighted`