Migrating to Zarrita v0.7.0
This release contains backwards-incompatible changes. To avoid automatically picking them up, pin an exact version of zarrita in your package.json (recommended), or use a patch-only range like ~0.7.0.
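For example, an exact pin in package.json:

```json
{
  "dependencies": {
    "zarrita": "0.7.0"
  }
}
```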
v0.7 has one hard break (zarr.create options are now camelCase) and one deprecation (FetchStore's overrides option, in favor of the new fetch handler). Everything else is additive.
TL;DR
- Breaking: `zarr.create` options are now camelCase.
- Deprecated: `FetchStore`'s `overrides` option; use the new `fetch` option instead.
- New: custom `fetch`, `AbortSignal` cancellation, named-dimension selection (`select`), composable store and array extensions (`defineStoreExtension`, `defineArrayExtension`), v3 consolidated metadata, range coalescing and byte caching, and more.
- Fixes: NaN/Inf fill values, scalar `get`/`set`, boolean narrowing, browser autodetect.
What's new
Custom fetch on FetchStore
FetchStore now accepts a WinterTC-style fetch handler (Request in, Response out). This is the recommended way to cover the long tail of things you might need at the fetch level: auth, presigning, header injection, response remapping, caching, retries. Because it is a real function you configure once on the store, every outgoing request picks it up and the rest of your code does not have to know about it.
Attaching an auth token:
```ts
const store = new FetchStore("https://example.com/data.zarr", {
  async fetch(request) {
    const token = await getAccessToken();
    request.headers.set("Authorization", `Bearer ${token}`);
    return fetch(request);
  },
});
```

Presigning a URL (for example, against an S3 bucket that requires signed URLs):
```ts
const store = new FetchStore("https://my-bucket.s3.amazonaws.com/data.zarr", {
  async fetch(request) {
    const signedUrl = await presign(request.url);
    return fetch(new Request(signedUrl, request));
  },
});
```

Remapping response status codes (useful when a backend returns 403 for missing chunks where zarrita expects 404):
```ts
const store = new FetchStore("https://my-bucket.s3.amazonaws.com/data.zarr", {
  async fetch(request) {
    const response = await fetch(request);
    if (response.status === 403) {
      return new Response(null, { status: 404 });
    }
    return response;
  },
});
```

See the deprecation section below for the full migration story from overrides.
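Retries fit the same shape. A sketch that wraps any handler (`withRetry` is a made-up helper, and the policy shown, retrying 5xx responses up to a fixed number of attempts, is an assumption, not zarrita behavior):

```typescript
// Hypothetical retry wrapper for a WinterTC-style fetch handler.
// Retries server errors (status >= 500) up to `retries` extra attempts.
function withRetry(
  innerFetch: (request: Request) => Promise<Response>,
  retries = 3,
): (request: Request) => Promise<Response> {
  return async (request) => {
    let response = await innerFetch(request);
    for (let attempt = 0; attempt < retries && response.status >= 500; attempt++) {
      // Clone the request: a Request body can only be consumed once.
      response = await innerFetch(request.clone());
    }
    return response;
  };
}
```

A real version would likely add backoff between attempts; the point is only that the handler composes like any other function.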
Cancellation with AbortSignal
open, get, and set now accept an AbortSignal. Signals are forwarded to the underlying store and checked between async steps so in-flight work can be cancelled cleanly.
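The "checked between async steps" behavior amounts to a checkpoint loop. A generic sketch (`runSteps` is a made-up helper for illustration, not zarrita API):

```typescript
// Run async steps in sequence, checking the signal before each one so
// cancellation takes effect at the next checkpoint.
async function runSteps(
  steps: Array<() => Promise<void>>,
  signal?: AbortSignal,
): Promise<number> {
  let completed = 0;
  for (const step of steps) {
    // Checkpoint: bail out before starting the next async step.
    if (signal?.aborted) {
      throw new Error("operation aborted");
    }
    await step();
    completed += 1;
  }
  return completed;
}
```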
```ts
const controller = new AbortController();
await zarr.get(arr, [null], { signal: controller.signal });
```

Named-dimension selection
Array now exposes a dimensionNames getter (v3 metadata, or _ARRAY_DIMENSIONS on v2), and the new select helper converts a record of dimension names into a positional selection array:
```ts
// arr.dimensionNames -> ["time", "lat", "lon"]
let selection = zarr.select(arr, { lat: zarr.slice(100, 200), time: 0 });
let result = await zarr.get(arr, selection);
```

Composable store and array extensions
v0.7 introduces two composable extension points, one per layer, for layering behavior on top of an existing store or array. Both are built from the same factory-plus-Proxy primitive:
| Layer | Intercepts | Primitive | Composer |
|---|---|---|---|
| Transport | store.get(key, range) | zarr.defineStoreExtension | zarr.extendStore |
| Data | array.getChunk(coords) | zarr.defineArrayExtension | zarr.extendArray |
Store extensions handle transport (paths and bytes); array extensions handle data (chunk coordinates). The factory receives the inner value and user options, and returns method overrides and any new fields to expose. Anything not returned is delegated to the inner value through a Proxy, so private instance state keeps working through the wrapper.
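The delegation rule can be sketched in a few lines (illustrative only; `defineExtension` here is a hypothetical stand-in for zarrita's internal primitive, not its actual source):

```typescript
// Factory-plus-Proxy sketch: the factory returns overrides and new
// fields; every other property falls through to the wrapped value.
function defineExtension<T extends object, O, E extends object>(
  factory: (inner: T, opts: O) => E,
): (inner: T, opts: O) => T & E {
  return (inner, opts) => {
    const overrides = factory(inner, opts);
    return new Proxy(inner, {
      get(target, prop) {
        if (prop in overrides) return Reflect.get(overrides, prop);
        // Delegate with the inner value as receiver, so methods that
        // touch private instance state keep working through the wrapper.
        return Reflect.get(target, prop, target);
      },
    }) as T & E;
  };
}
```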
If you had a custom AsyncReadable subclass for caching, consolidation, or request batching, these extension points replace that pattern in v0.7. Here's a small store extension that caches bytes, followed by a small array extension that caches decoded chunks:
```ts
const withByteCache = zarr.defineStoreExtension(
  (store, opts: { maxSize?: number } = {}) => {
    let cache = new Map<zarr.AbsolutePath, Uint8Array>();
    return {
      async get(key, options) {
        let hit = cache.get(key);
        if (hit) return hit;
        let bytes = await store.get(key, options);
        if (bytes) cache.set(key, bytes);
        return bytes;
      },
      clear() {
        cache.clear();
      },
    };
  },
);

const withChunkCache = zarr.defineArrayExtension(
  (array, opts: { cache: Map<string, zarr.Chunk<zarr.DataType>> }) => ({
    async getChunk(coords, options) {
      let key = coords.join(",");
      let hit = opts.cache.get(key);
      if (hit) return hit;
      let chunk = await array.getChunk(coords, options);
      opts.cache.set(key, chunk);
      return chunk;
    },
  }),
);
```

Compose extensions in a pipeline with extendStore / extendArray. Each step wraps the previous one, and any async factory (like withConsolidatedMetadata, which fetches metadata during initialization) is handled automatically:
```ts
let store = await zarr.extendStore(
  new zarr.FetchStore("https://example.com/data.zarr"),
  zarr.withConsolidatedMetadata,
  (s) => zarr.withRangeCoalescing(s, { coalesceSize: 32_768 }),
  (s) => zarr.withByteCaching(s),
);
```

The three store extensions that ship with v0.7 (zarr.withConsolidatedMetadata, zarr.withRangeCoalescing, and zarr.withByteCaching) are all built on defineStoreExtension. They're the first concrete store extensions, not one-offs. The full API reference lives in the store extensions docs.
Auto-applying array extensions from a store
A store extension can also declare an arrayExtensions field on its factory result. zarr.open reads that list from the composed store and wraps every zarr.Array it returns with those extensions, so downstream consumers don't need to call zarr.extendArray at each call site.
This is the enabling primitive for virtual-format adapters: projects like hdf5-as-virtual-zarr, tiff-as-virtual-zarr, and parquet-as-virtual-zarr that need to synthesize metadata at the transport layer and supply decoded chunks at the data layer from a single factory with shared closure state:
```ts
const hdf5VirtualZarr = zarr.defineStoreExtension(
  (inner, opts: { root: string }) => {
    let parsed = parseHdf5(opts.root); // shared between get and getChunk
    return {
      async get(key, options) {
        if (isVirtualMetadataKey(key, parsed)) {
          return synthesizeJson(key, parsed);
        }
        return inner.get(key, options);
      },
      arrayExtensions: [
        zarr.defineArrayExtension((_inner) => ({
          async getChunk(coords) {
            return parsed.readChunk(coords);
          },
        })),
      ],
    };
  },
);

let store = await zarr.extendStore(raw, (s) =>
  hdf5VirtualZarr(s, { root: "/my_image" }),
);

// Downstream code doesn't know the adapter exists. It opens and reads.
let arr = await zarr.open(store, { kind: "array", path: "/my_image" });
await zarr.get(arr, [null, zarr.slice(0, 10)]);
```

When store extensions are stacked, each layer's arrayExtensions are merged inner-first, outer-last, symmetric with how the store extensions themselves compose. Groups don't need special handling: nested zarr.open(group.resolve("child")) still picks up the wrapping, because the store reference flows through and each Array-producing open call reads the list on its own.
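The inner-first ordering can be pictured with plain data (an assumed-semantics sketch, not zarrita's implementation):

```typescript
// Each store wrapper appends its own array extensions after the ones it
// inherits from the layer beneath it.
type Layer = { arrayExtensions: string[] };

function wrapLayer(inner: Layer, own: string[]): Layer {
  return { arrayExtensions: [...inner.arrayExtensions, ...own] };
}

// Stacking A (inner) then B (outer) yields A's extensions first:
const stacked = wrapLayer(wrapLayer({ arrayExtensions: [] }, ["A"]), ["B"]);
// stacked.arrayExtensions -> ["A", "B"]
```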
v3 consolidated metadata (experimental)
zarr.withConsolidatedMetadata (renamed from withConsolidated) is a store extension (see above) that now supports Zarr v3, reading consolidated_metadata from the root zarr.json to match zarr-python. A format option controls which format(s) to try, accepting a single string or an array for fallback ordering:
```ts
await zarr.withConsolidatedMetadata(store); // auto-detect
await zarr.withConsolidatedMetadata(store, { format: ["v3", "v2"] }); // v3, fall back to v2
```

The previous tryWithConsolidated helper (which no-ops if consolidated metadata is absent) is now zarr.withMaybeConsolidatedMetadata.
Note: v3 consolidated metadata is not yet part of the official Zarr v3 spec and should be considered experimental.
Range coalescing and byte caching
What used to be a single withRangeBatching extension is now two composable pieces: zarr.withRangeCoalescing merges concurrent range reads into fewer HTTP requests, and zarr.withByteCaching caches the results. They stand alone or stack together, and either can be combined with other store extensions via extendStore:
```ts
let store = await zarr.extendStore(
  new zarr.FetchStore("https://example.com/data.zarr"),
  (s) => zarr.withRangeCoalescing(s, { coalesceSize: 32_768 }),
  (s) => zarr.withByteCaching(s),
);
```

withRangeCoalescing groups concurrent getRange() calls on the same path within a microtask, merging adjacent ranges that fall within coalesceSize bytes of each other, and issues one fetch per group. An optional onFlush callback reports per-flush statistics (group count, request count, bytes fetched) for observability.
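The merging rule itself is small. A self-contained sketch (`coalesce` is a hypothetical helper; the real extension also splices each group's bytes back into per-range responses):

```typescript
interface ByteRange {
  offset: number;
  length: number;
}

// Sort ranges by offset and merge neighbors whose gap is at most
// `coalesceSize` bytes, yielding one fetchable range per group.
function coalesce(ranges: ByteRange[], coalesceSize: number): ByteRange[] {
  const sorted = [...ranges].sort((a, b) => a.offset - b.offset);
  const groups: ByteRange[] = [];
  for (const r of sorted) {
    const last = groups[groups.length - 1];
    if (last && r.offset - (last.offset + last.length) <= coalesceSize) {
      // Extend the current group to cover this range.
      const end = Math.max(last.offset + last.length, r.offset + r.length);
      last.length = end - last.offset;
    } else {
      groups.push({ ...r });
    }
  }
  return groups;
}
```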
withByteCaching caches both get() and getRange() responses, and is policy-agnostic: pass your own ByteCache-compatible container (e.g. an LRU) to control eviction, or a keyFor function to narrow caching to a subset of paths. See the store extensions docs for details.
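For example, a minimal LRU container could serve as the eviction policy (a sketch; "ByteCache-compatible" is assumed here to mean Map-like get/set/delete, so check the store extensions docs for the exact interface before relying on this shape):

```typescript
// Least-recently-used cache built on Map's insertion-order iteration.
class LruCache<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert so the key becomes most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): this {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Maps iterate in insertion order: the first key is least recently used.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
    return this;
  }

  delete(key: K): boolean {
    return this.map.delete(key);
  }
}
```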
Also new
- `fillValue` getter on `Array`, with proper handling of `NaN`/`Infinity`/`-Infinity` across v2 and v3.
- `dimensionNames` in `create`, and exposed on the array itself.
- `numcodecs.*` namespace for the v2 codec registry, matching zarr-python's convention. Built-in `numcodecs.shuffle` and `numcodecs.delta` ship out of the box (pure JS, no WASM). Custom v2 codecs register under the same prefix.
- `bigint` in `slice()` for addressing large dimensions.
- `attrs` option in top-level `open()` to skip `.zattrs` loading for v2 stores.
- Source maps to TypeScript (`declarationMap`) so "go to definition" resolves to `.ts` source instead of `.d.ts`.
Breaking change
zarr.create options are camelCase
The rest of zarrita's public API is camelCase (FetchStore, withConsolidatedMetadata, dimensionNames, and so on), but zarr.create had historically taken its options in snake_case because the option names were passed through more or less directly to the on-disk Zarr metadata, which itself is snake_case. This meant that any code that both read and wrote arrays ended up mixing cases, and anyone new to the library had to remember which side of the library they were on before they knew what to type.
With this release, zarr.create options are camelCase, and data_type is now called dtype to match the name zarrita uses everywhere else. Internally, zarrita still writes out the correct snake_case field names into the stored metadata, so there is no change to the on-disk format. This only affects the TypeScript/JavaScript API.
```diff
  await zarr.create(store, {
-   data_type: "float32",
+   dtype: "float32",
    shape: [100, 100],
-   chunk_shape: [10, 10],
+   chunkShape: [10, 10],
-   chunk_separator: "/",
+   chunkSeparator: "/",
-   fill_value: 0,
+   fillValue: 0,
-   dimension_names: ["y", "x"],
+   dimensionNames: ["y", "x"],
  });
```

TypeScript will flag every callsite that needs updating. If you are calling zarr.create from untyped JavaScript, note that the old snake_case fields are silently ignored rather than rejected, so a missed rename will write an incomplete zarr.json to disk. We recommend running your write paths once against a throwaway store after upgrading to catch any stragglers.
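For a belt-and-suspenders check in untyped code, a tiny guard like this (a hypothetical helper, not part of zarrita) can flag stale keys before an options bag reaches zarr.create:

```typescript
// Map of removed snake_case option names to their camelCase replacements.
const RENAMED: Record<string, string> = {
  data_type: "dtype",
  chunk_shape: "chunkShape",
  chunk_separator: "chunkSeparator",
  fill_value: "fillValue",
  dimension_names: "dimensionNames",
};

// Returns human-readable "old -> new" hints for any stale keys found.
function findStaleCreateOptions(options: Record<string, unknown>): string[] {
  return Object.keys(options)
    .filter((key) => key in RENAMED)
    .map((key) => `${key} -> ${RENAMED[key]}`);
}
```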
Deprecation
FetchStore's overrides option is deprecated in favor of fetch
overrides took a static RequestInit merged into every request. Anything computed per request (auth tokens that need refreshing, per-URL signing) had to leak out of the store and into each zarr.get call site:
```ts
// Before: auth logic threaded through every get
const store = new FetchStore("https://example.com/data.zarr");
const arr = await zarr.open(store);
let chunk = await zarr.get(arr, null, {
  opts: { headers: { Authorization: `Bearer ${await getAccessToken()}` } },
});
```

v0.7 adds a fetch option that takes a WinterTC-style handler (Request in, Promise<Response> out). Because it's a real function configured once on the store, call sites don't need to know about it:
```ts
// After: call sites don't need to know about auth
const store = new FetchStore("https://example.com/data.zarr", {
  async fetch(request) {
    const token = await getAccessToken();
    request.headers.set("Authorization", `Bearer ${token}`);
    return fetch(request);
  },
});
const arr = await zarr.open(store);
let chunk = await zarr.get(arr);
```

overrides still works in v0.7.x and will be removed in a future major release. If all you were doing with it was setting a static header, the migration is mechanical:
```ts
// Before
new FetchStore(url, { overrides: { headers: { "X-Api-Key": key } } });

// After
new FetchStore(url, {
  fetch(request) {
    request.headers.set("X-Api-Key", key);
    return fetch(request);
  },
});
```

Bug fixes
- Fill values: `NaN`, `Infinity`, and `-Infinity` now round-trip correctly per the Zarr v3 spec.
- Scalar arrays: `get` and `set` now work for `shape=[]`.
- Version autodetect: `zarr.open` no longer fails in browsers when servers return non-JSON responses for v2 metadata keys.
- `NarrowDataType` correctly narrows the `"boolean"` query to `Bool`.
Under the hood
- `unzipit` upgraded from 1.4.3 to 2.0.0.