Input and output

qim3d.io.Downloader

Provides access to the QIM online data repository and manages file downloads.

This utility allows users to easily fetch, download, and load sample datasets for testing, benchmarking, or educational purposes. It automatically handles local caching to avoid repeated downloads of the same file.
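The cache check at the heart of this behavior can be sketched as follows. This is a minimal illustration rather than the actual implementation (the real method also handles Zarr stores, progress bars, and network errors), and the example URL and helper name are made up:

```python
import os
from urllib.parse import urlparse


def cached_dest(url: str, output_dir: str = ".") -> tuple[str, bool]:
    """Derive the local destination for a URL and report whether the
    download can be skipped because the file is already on disk."""
    fname = os.path.basename(urlparse(url).path.rstrip("/"))
    dest = os.path.join(output_dir, fname)
    return dest, os.path.exists(dest)


dest, hit = cached_dest("https://data.qim.dk/Coal/CoalBrikett.tif")
print(dest, hit)
```

On a cache hit the downloader reuses the local file (logging a warning) instead of re-downloading it.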

The Downloader acts as an interface to the QIM data repository, organizing files into accessible attributes (e.g., downloader.Cowry_Shell.Cowry_DOWNSAMPLED). It also serves as a general-purpose tool to retrieve files from any given URL via the __call__ method.

Attributes:

  • folder_name (str or PathLike): Dynamic attributes corresponding to folders in the repository (e.g., Coal, Foam, Shell).

Methods:

  • list_files: Displays a catalog of downloadable files available in the repository.
  • __call__: Downloads a file from a specific URL.

Syntax for downloading and loading a pre-defined file: qim3d.io.Downloader().{folder_name}.{file_name}(load_file=True)
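The folder attributes behind this syntax are created dynamically at construction time (see `__init__` in the source below). A toy version of the pattern, with a plain string standing in for the internal `_Myfolder` object:

```python
class Catalog:
    """Toy illustration of Downloader's dynamic attributes: every folder
    name in the repository becomes an attribute on the instance."""

    def __init__(self, folders):
        for name in folders:
            # Downloader attaches a _Myfolder object here; a string suffices for the demo.
            setattr(self, name, f"<folder {name}>")


catalog = Catalog(["Coal", "Foam", "Cowry_Shell"])
print(catalog.Cowry_Shell)
```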

Overview of available data

Below is a table of the available folders and files on the QIM data repository.

Folder name | File name | File size
----------- | --------- | ---------
Coal | CoalBrikett <br> CoalBrikett_Zoom <br> CoalBrikettZoom_DOWNSAMPLED | 2.23 GB <br> 3.72 GB <br> 238 MB
Corals | Coral_1 <br> Coral_2 <br> Coral2_DOWNSAMPLED <br> MexCoral | 2.26 GB <br> 2.38 GB <br> 162 MB <br> 2.23 GB
Cowry_Shell | Cowry_Shell <br> Cowry_DOWNSAMPLED | 1.82 GB <br> 116 MB
Crab | HerrmitCrab <br> OkinawaCrab | 2.38 GB <br> 1.86 GB
Deer_Mandible | Animal_Mandible <br> DeerMandible_DOWNSAMPLED | 2.79 GB <br> 638 MB
Foam | Foam <br> Foam_DOWNSAMPLED <br> Foam_2 <br> Foam_2_zoom | 3.72 GB <br> 238 MB <br> 3.72 GB <br> 3.72 GB
Hourglass | Hourglass <br> Hourglass_4X_80kV_Air_9s_1_97um <br> Hourglass_longexp_rerun | 3.72 GB <br> 1.83 GB <br> 3.72 GB
Kiwi | Kiwi | 2.86 GB
Loofah | Loofah <br> Loofah_DOWNSAMPLED | 2.23 GB <br> 143 MB
Marine_Gastropods | MarineGatropod_1 <br> MarineGastropod1_DOWNSAMPLED <br> MarineGatropod_2 <br> MarineGastropod2_DOWNSAMPLED | 2.23 GB <br> 143 MB <br> 2.60 GB <br> 166 MB
Mussel | ClosedMussel1 <br> ClosedMussel1_DOWNSAMPLED | 2.23 GB <br> 143 MB
Oak_Branch | Oak_branch <br> OakBranch_DOWNSAMPLED | 2.38 GB <br> 152 MB
Okinawa_Forams | Okinawa_Foram_1 <br> Okinawa_Foram_2 | 1.84 GB <br> 1.84 GB
Physalis | Physalis <br> Physalis_DOWNSAMPLED | 3.72 GB <br> 238 MB
Raspberry | Raspberry2 <br> Raspberry2_DOWNSAMPLED | 2.97 GB <br> 190 MB
Rope | FibreRope1 <br> FibreRope1_DOWNSAMPLED | 1.82 GB <br> 686 MB
Sea_Urchin | SeaUrchin <br> Cordatum_Shell <br> Cordatum_Spine | 2.60 GB <br> 1.85 GB <br> 183 MB
Snail | Escargot | 2.60 GB
Sponge | Sponge | 1.11 GB
Example

import qim3d

downloader = qim3d.io.Downloader()

# Browse available files
downloader.list_files()

# Download and load a specific sample
data = downloader.Cowry_Shell.Cowry_DOWNSAMPLED(load_file=True)

qim3d.viz.slicer_orthogonal(data, colormap="magma")
![cowry shell](../../assets/screenshots/cowry_shell_slicer.gif)

Source code in qim3d/io/_downloader.py
class Downloader:
    """
    Provides access to the QIM online data repository and manages file downloads.

    This utility allows users to easily fetch, download, and load sample datasets for testing, 
    benchmarking, or educational purposes. It automatically handles local caching to avoid 
    repeated downloads of the same file.

    The `Downloader` acts as an interface to the [QIM data repository](https://data.qim.dk/), 
    organizing files into accessible attributes (e.g., `downloader.Cowry_Shell.Cowry_DOWNSAMPLED`). It also 
    serves as a general-purpose tool to retrieve files from any given URL via the `__call__` method.

    Attributes:
        folder_name (str or os.PathLike): Dynamic attributes corresponding to folders in the repository (e.g., `Coal`, `Foam`, `Shell`).

    Methods:
        list_files(): Displays a catalog of downloadable files available in the repository.
        __call__(url, ...): Downloads a file from a specific URL.

    Syntax for downloading and loading a pre-defined file:
    `qim3d.io.Downloader().{folder_name}.{file_name}(load_file=True)`

    ??? info "Overview of available data"
        Below is a table of the available folders and files on the [QIM data repository](https://data.qim.dk/).

        Folder name         | File name                                                                                                          | File size
        ------------------- | ------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------
        `Coal`              | `CoalBrikett` <br> `CoalBrikett_Zoom` <br> `CoalBrikettZoom_DOWNSAMPLED`                                           | 2.23 GB <br> 3.72 GB <br> 238 MB
        `Corals`            | `Coral_1` <br> `Coral_2` <br> `Coral2_DOWNSAMPLED` <br> `MexCoral`                                                 | 2.26 GB <br> 2.38 GB <br> 162 MB <br> 2.23 GB
        `Cowry_Shell`       | `Cowry_Shell` <br> `Cowry_DOWNSAMPLED`                                                                             | 1.82 GB <br> 116 MB
        `Crab`              | `HerrmitCrab` <br> `OkinawaCrab`                                                                                   | 2.38 GB <br> 1.86 GB
        `Deer_Mandible`     | `Animal_Mandible` <br> `DeerMandible_DOWNSAMPLED`                                                                  | 2.79 GB <br> 638 MB
        `Foam`              | `Foam` <br> `Foam_DOWNSAMPLED` <br> `Foam_2` <br> `Foam_2_zoom`                                                    | 3.72 GB <br> 238 MB <br> 3.72 GB <br> 3.72 GB
        `Hourglass`         | `Hourglass` <br> `Hourglass_4X_80kV_Air_9s_1_97um` <br> `Hourglass_longexp_rerun`                                  | 3.72 GB <br> 1.83 GB <br> 3.72 GB
        `Kiwi`              | `Kiwi`                                                                                                             | 2.86 GB
        `Loofah`            | `Loofah` <br> `Loofah_DOWNSAMPLED`                                                                                 | 2.23 GB <br> 143 MB
        `Marine_Gastropods` | `MarineGatropod_1` <br> `MarineGastropod1_DOWNSAMPLED` <br> `MarineGatropod_2` <br> `MarineGastropod2_DOWNSAMPLED` | 2.23 GB <br> 143 MB <br> 2.60 GB <br> 166 MB
        `Mussel`            | `ClosedMussel1` <br> `ClosedMussel1_DOWNSAMPLED`                                                                   | 2.23 GB <br> 143 MB
        `Oak_Branch`        | `Oak_branch` <br> `OakBranch_DOWNSAMPLED`                                                                          | 2.38 GB <br> 152 MB
        `Okinawa_Forams`    | `Okinawa_Foram_1` <br> `Okinawa_Foram_2`                                                                           | 1.84 GB <br> 1.84 GB
        `Physalis`          | `Physalis` <br> `Physalis_DOWNSAMPLED`                                                                             | 3.72 GB <br> 238 MB
        `Raspberry`         | `Raspberry2` <br> `Raspberry2_DOWNSAMPLED`                                                                         | 2.97 GB <br> 190 MB
        `Rope`              | `FibreRope1` <br> `FibreRope1_DOWNSAMPLED`                                                                         | 1.82 GB <br> 686 MB
        `Sea_Urchin`        | `SeaUrchin` <br> `Cordatum_Shell` <br> `Cordatum_Spine`                                                            | 2.60 GB <br> 1.85 GB <br> 183 MB
        `Snail`             | `Escargot`                                                                                                         | 2.60 GB
        `Sponge`            | `Sponge`                                                                                                           | 1.11 GB

    Example:
        ```python
        import qim3d

        downloader = qim3d.io.Downloader()

        # Browse available files
        downloader.list_files()

        # Download and load a specific sample
        data = downloader.Cowry_Shell.Cowry_DOWNSAMPLED(load_file=True)

        qim3d.viz.slicer_orthogonal(data, colormap="magma")
        ```
        ![cowry shell](../../assets/screenshots/cowry_shell_slicer.gif)
    """

    def __init__(self):
        folders = _extract_names()
        for folder in folders:
            setattr(self, folder, _Myfolder(folder))

    def __call__(
        self,
        url: str,
        output_dir: str = '.',
        load_file: bool = False,
        virtual_stack: bool = True,
        scale: int = 0,
    ) -> object:
        """
        Downloads a file or dataset from a direct URL.

        This function serves as a general-purpose file retriever, capable of fetching data from 
        external resources or cloud storage. It supports standard image formats (TIFF, HDF5, NIfTI, DICOM) 
        as well as modern chunked formats (Zarr, OME-Zarr).

        For large 3D volumes that may exceed available RAM, this function supports lazy loading 
        via the `virtual_stack` parameter. This allows users to work with metadata and open the 
        file without immediately reading the full pixel data into memory.

        Args:
            url (str):
                The direct URL of the file to download. Supported formats include:
                regular files (TIFF, HDF5, TXRM/TXM/XRM, NIfTI, PIL, VOL/VGI, DICOM) 
                and Zarr/OME-Zarr stores.
            output_dir (str, optional):
                The local directory where the file will be saved. Defaults to the current working directory.
            load_file (bool, optional):
                If `True`, the function will import/read the file into a Python object after downloading. 
                If `False`, it only downloads the file to the disk. Default is `False`.
            virtual_stack (bool, optional):
                If `True`, the file is loaded as a virtual stack (lazy loading) if the format supports it. 
                This is recommended for large datasets to save memory. Default is `True`.
            scale (int, optional):
                Used only for Zarr/OME-Zarr stores when `load_file` is True. Specifies the resolution 
                level (pyramid scale) to load. 0 is full resolution. Default is 0.

        Returns:
            object:
                The downloaded data or file path. The specific return type depends on the parameters:

                * **str**: The local path to the file (if `load_file=False`).
                * **numpy.ndarray**: The full image data loaded into memory (if `load_file=True` and `virtual_stack=False`).
                * **dask.array.Array**: A lazy-loaded virtual stack (if `load_file=True` and `virtual_stack=True`).

        Example:
            ```python
            import qim3d

            downloader = qim3d.io.Downloader()

            # 1. Download a file from a URL without loading it
            path = downloader(
                url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
                output_dir=".",
                load_file=False
            )

            # 2. Download and load directly into memory (Numpy array)
            data = downloader(
                url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
                load_file=True,
                virtual_stack=False
            )
            ```
        """

        parsed = urlparse(url)
        fname = os.path.basename(parsed.path.rstrip('/'))
        dest = os.path.join(output_dir, fname)

        # --- Zarr / OME-Zarr store ---
        if fname.endswith(('.zarr', '.ome.zarr')):
            if os.path.exists(dest):
                log.warning(f'Zarr store already downloaded:\n{os.path.abspath(dest)}')
            else:
                log.info(f'Downloading Zarr store {fname}\n{url}')
            download(url, output_dir=output_dir)  # always returns None
            if load_file:
            # virtual_stack=True  --> dask array  --> pass load=False (skip .compute())
            # virtual_stack=False --> numpy array --> pass load=True  (call .compute())
                log.info(
                    f"\nLoading scale={scale} from {fname} as {'numpy array' if not virtual_stack else 'dask array'}"
                )
                return qim3d.io.import_ome_zarr(
                    dest, scale=scale, load=not virtual_stack
                )
            return dest

        # --- Regular single file ---
        if os.path.exists(dest):
            log.warning(f'File already downloaded:\n{os.path.abspath(dest)}')
            if load_file:
                return load(path=dest, virtual_stack=virtual_stack)
            return dest
        else:
            log.info(f'Downloading file {fname}\n{url}')
            try:
                total = _get_file_size(url)
            except (HTTPError, URLError):
                total = None

            os.makedirs(output_dir, exist_ok=True)
            with tqdm(
                total=total, unit='B', unit_scale=True, unit_divisor=1024, ncols=80
            ) as pbar:
                try:
                    urllib.request.urlretrieve(
                        url,
                        dest,
                        reporthook=lambda blocknum, bs, total: _update_progress(
                            pbar, blocknum, bs
                        ),
                    )
                except HTTPError as http_err:
                    msg = f'Failed to download {url!r}: server returned HTTP {http_err.code}'
                    raise FileNotFoundError(msg) from http_err
                except URLError as url_err:
                    msg = f'Failed to reach {url!r}: {url_err.reason}'
                    raise ConnectionError(msg) from url_err

        if load_file:
            log.info(f'\nLoading {fname}')
            return load(path=dest, virtual_stack=virtual_stack)

        return dest

    def list_files(self) -> None:
        """
        Displays a catalog of all available datasets in the QIM repository.

        This method prints a formatted list of folder names, file names, and file sizes to the console log. 
        It is useful for exploring the inventory of biological and material science scans available for 
        download without needing to visit the website.

        The output groups files by their parent folder (e.g., 'Coal', 'Corals', 'Foam').
        """

        url_dl = 'https://archive.compute.dtu.dk/download/public/projects/viscomp_data_repository'

        folders = _extract_names()

        for folder in folders:
            log.info(f'\n{ouf.boxtitle(folder, return_str=True)}')
            files = _extract_names(folder)

            for file in files:
                url = os.path.join(url_dl, folder, file).replace('\\', '/')
                file_size = _get_file_size(url)
                formatted_file = (
                    f"{file[:-len(file.split('.')[-1])-1].replace('%20', '_')}"
                )
                formatted_size = _format_file_size(file_size)
                path_string = f'{folder}.{formatted_file}'

                log.info(f'{path_string:<50}({formatted_size})')

qim3d.io.Downloader.__call__

__call__(
    url,
    output_dir='.',
    load_file=False,
    virtual_stack=True,
    scale=0,
)

Downloads a file or dataset from a direct URL.

This function serves as a general-purpose file retriever, capable of fetching data from external resources or cloud storage. It supports standard image formats (TIFF, HDF5, NIfTI, DICOM) as well as modern chunked formats (Zarr, OME-Zarr).

For large 3D volumes that may exceed available RAM, this function supports lazy loading via the virtual_stack parameter. This allows users to work with metadata and open the file without immediately reading the full pixel data into memory.

Parameters:

Name | Type | Description | Default
---- | ---- | ----------- | -------
url | str | The direct URL of the file to download. Supported formats include regular files (TIFF, HDF5, TXRM/TXM/XRM, NIfTI, PIL, VOL/VGI, DICOM) and Zarr/OME-Zarr stores. | required
output_dir | str | The local directory where the file will be saved. Defaults to the current working directory. | '.'
load_file | bool | If True, the file is imported/read into a Python object after downloading. If False, it is only downloaded to disk. | False
virtual_stack | bool | If True, the file is loaded as a virtual stack (lazy loading) if the format supports it. Recommended for large datasets to save memory. | True
scale | int | Used only for Zarr/OME-Zarr stores when load_file is True. Specifies the resolution level (pyramid scale) to load; 0 is full resolution. | 0

Returns:

object: The downloaded data or file path. The specific return type depends on the parameters:

  • str: The local path to the file (if load_file=False).
  • numpy.ndarray: The full image data loaded into memory (if load_file=True and virtual_stack=False).
  • dask.array.Array: A lazy-loaded virtual stack (if load_file=True and virtual_stack=True).
Example
import qim3d

downloader = qim3d.io.Downloader()

# 1. Download a file from a URL without loading it
path = downloader(
    url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
    output_dir=".",
    load_file=False
)

# 2. Download and load directly into memory (Numpy array)
data = downloader(
    url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
    load_file=True,
    virtual_stack=False
)
Source code in qim3d/io/_downloader.py
def __call__(
    self,
    url: str,
    output_dir: str = '.',
    load_file: bool = False,
    virtual_stack: bool = True,
    scale: int = 0,
) -> object:
    """
    Downloads a file or dataset from a direct URL.

    This function serves as a general-purpose file retriever, capable of fetching data from 
    external resources or cloud storage. It supports standard image formats (TIFF, HDF5, NIfTI, DICOM) 
    as well as modern chunked formats (Zarr, OME-Zarr).

    For large 3D volumes that may exceed available RAM, this function supports lazy loading 
    via the `virtual_stack` parameter. This allows users to work with metadata and open the 
    file without immediately reading the full pixel data into memory.

    Args:
        url (str):
            The direct URL of the file to download. Supported formats include:
            regular files (TIFF, HDF5, TXRM/TXM/XRM, NIfTI, PIL, VOL/VGI, DICOM) 
            and Zarr/OME-Zarr stores.
        output_dir (str, optional):
            The local directory where the file will be saved. Defaults to the current working directory.
        load_file (bool, optional):
            If `True`, the function will import/read the file into a Python object after downloading. 
            If `False`, it only downloads the file to the disk. Default is `False`.
        virtual_stack (bool, optional):
            If `True`, the file is loaded as a virtual stack (lazy loading) if the format supports it. 
            This is recommended for large datasets to save memory. Default is `True`.
        scale (int, optional):
            Used only for Zarr/OME-Zarr stores when `load_file` is True. Specifies the resolution 
            level (pyramid scale) to load. 0 is full resolution. Default is 0.

    Returns:
        object:
            The downloaded data or file path. The specific return type depends on the parameters:

            * **str**: The local path to the file (if `load_file=False`).
            * **numpy.ndarray**: The full image data loaded into memory (if `load_file=True` and `virtual_stack=False`).
            * **dask.array.Array**: A lazy-loaded virtual stack (if `load_file=True` and `virtual_stack=True`).

    Example:
        ```python
        import qim3d

        downloader = qim3d.io.Downloader()

        # 1. Download a file from a URL without loading it
        path = downloader(
            url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
            output_dir=".",
            load_file=False
        )

        # 2. Download and load directly into memory (Numpy array)
        data = downloader(
            url="https://archive.compute.dtu.dk/.../Cowry_DOWNSAMPLED.tif",
            load_file=True,
            virtual_stack=False
        )
        ```
    """

    parsed = urlparse(url)
    fname = os.path.basename(parsed.path.rstrip('/'))
    dest = os.path.join(output_dir, fname)

    # --- Zarr / OME-Zarr store ---
    if fname.endswith(('.zarr', '.ome.zarr')):
        if os.path.exists(dest):
            log.warning(f'Zarr store already downloaded:\n{os.path.abspath(dest)}')
        else:
            log.info(f'Downloading Zarr store {fname}\n{url}')
        download(url, output_dir=output_dir)  # always returns None
        if load_file:
        # virtual_stack=True  --> dask array  --> pass load=False (skip .compute())
        # virtual_stack=False --> numpy array --> pass load=True  (call .compute())
            log.info(
                f"\nLoading scale={scale} from {fname} as {'numpy array' if not virtual_stack else 'dask array'}"
            )
            return qim3d.io.import_ome_zarr(
                dest, scale=scale, load=not virtual_stack
            )
        return dest

    # --- Regular single file ---
    if os.path.exists(dest):
        log.warning(f'File already downloaded:\n{os.path.abspath(dest)}')
        if load_file:
            return load(path=dest, virtual_stack=virtual_stack)
        return dest
    else:
        log.info(f'Downloading file {fname}\n{url}')
        try:
            total = _get_file_size(url)
        except (HTTPError, URLError):
            total = None

        os.makedirs(output_dir, exist_ok=True)
        with tqdm(
            total=total, unit='B', unit_scale=True, unit_divisor=1024, ncols=80
        ) as pbar:
            try:
                urllib.request.urlretrieve(
                    url,
                    dest,
                    reporthook=lambda blocknum, bs, total: _update_progress(
                        pbar, blocknum, bs
                    ),
                )
            except HTTPError as http_err:
                msg = f'Failed to download {url!r}: server returned HTTP {http_err.code}'
                raise FileNotFoundError(msg) from http_err
            except URLError as url_err:
                msg = f'Failed to reach {url!r}: {url_err.reason}'
                raise ConnectionError(msg) from url_err

    if load_file:
        log.info(f'\nLoading {fname}')
        return load(path=dest, virtual_stack=virtual_stack)

    return dest

qim3d.io.Downloader.list_files

list_files()

Displays a catalog of all available datasets in the QIM repository.

This method prints a formatted list of folder names, file names, and file sizes to the console log. It is useful for exploring the inventory of biological and material science scans available for download without needing to visit the website.

The output groups files by their parent folder (e.g., 'Coal', 'Corals', 'Foam').
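Each catalog line is printed as `{folder}.{file_name_without_extension}` followed by the size. The dense slicing expression in the source below strips the file extension and replaces URL-encoded spaces; in isolation it behaves like this (the helper name is ours, not part of the API):

```python
def display_name(filename: str) -> str:
    """Mirror the name formatting in list_files: drop the final
    extension and replace URL-encoded spaces with underscores."""
    ext = filename.split(".")[-1]
    return filename[: -len(ext) - 1].replace("%20", "_")


print(display_name("Cowry_DOWNSAMPLED.tif"))  # Cowry_DOWNSAMPLED
print(display_name("My%20Scan.tar.gz"))       # My_Scan.tar
```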

Source code in qim3d/io/_downloader.py
def list_files(self) -> None:
    """
    Displays a catalog of all available datasets in the QIM repository.

    This method prints a formatted list of folder names, file names, and file sizes to the console log. 
    It is useful for exploring the inventory of biological and material science scans available for 
    download without needing to visit the website.

    The output groups files by their parent folder (e.g., 'Coal', 'Corals', 'Foam').
    """

    url_dl = 'https://archive.compute.dtu.dk/download/public/projects/viscomp_data_repository'

    folders = _extract_names()

    for folder in folders:
        log.info(f'\n{ouf.boxtitle(folder, return_str=True)}')
        files = _extract_names(folder)

        for file in files:
            url = os.path.join(url_dl, folder, file).replace('\\', '/')
            file_size = _get_file_size(url)
            formatted_file = (
                f"{file[:-len(file.split('.')[-1])-1].replace('%20', '_')}"
            )
            formatted_size = _format_file_size(file_size)
            path_string = f'{folder}.{formatted_file}'

            log.info(f'{path_string:<50}({formatted_size})')

qim3d.io.load

load(
    path,
    virtual_stack=False,
    dataset_name=None,
    return_metadata=False,
    contains=None,
    force_load=False,
    dim_order=(2, 1, 0),
    progress_bar=False,
    display_memory_usage=False,
    **kwargs,
)

General-purpose function to load, import, or read 3D data from various file formats.

Automatically identifies the file format based on the extension and uses the appropriate backend to open the file. It supports a wide range of standard bio-imaging and material science formats.

Supported Formats:

  • Images: TIFF (.tif, .tiff), PIL supported formats (.png, .jpg, etc.)
  • Volumes: DICOM (.dcm), NIfTI (.nii, .nii.gz)
  • HDF5-based: HDF5 (.h5), TXRM/TXM/XRM (.txrm, .txm)
  • Raw/Binary: VGI/VOL (.vgi)
  • Cloud/Chunked: Zarr (.zarr)
  • Stacks: Directories containing sequences of 2D images (TIFF or DICOM series).

Memory Management (Virtual Stack):

For datasets that are too large to fit into RAM, use virtual_stack=True. This enables lazy loading (or memory mapping), allowing you to inspect metadata (shape, data type) and slice specific sub-regions of the volume without reading the entire file into memory immediately.
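To see what lazy loading buys you independently of qim3d, here is a library-agnostic sketch using `numpy.memmap`, the same mechanism the loader uses for TIFF and VOL virtual stacks. The file, shape, and dtype are invented for the demonstration:

```python
import os
import tempfile

import numpy as np

# Write a small zero-filled volume to disk to stand in for a large scan.
path = os.path.join(tempfile.mkdtemp(), "volume.dat")
shape = (64, 128, 128)
np.memmap(path, dtype=np.uint16, mode="w+", shape=shape).flush()

# Re-open lazily: shape and dtype are available without reading pixel data.
lazy = np.memmap(path, dtype=np.uint16, mode="r", shape=shape)
print(lazy.shape, lazy.dtype)

# Only the requested sub-region is actually read from disk.
sub = np.asarray(lazy[10:12, :32, :32])
print(sub.shape)  # (2, 32, 32)
```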

Parameters:

Name | Type | Description | Default
---- | ---- | ----------- | -------
path | str or PathLike | The path to the file or directory to load. If a directory is provided, the function attempts to load it as a stack of images (e.g., a DICOM series or TIFF sequence). | required
virtual_stack | bool | If True, enables lazy loading. The data is not read into memory until explicitly accessed. Useful for inspecting large files. | False
dataset_name | str | Used only for HDF5 files containing multiple datasets. Specifies which internal dataset to load. | None
return_metadata | bool | If True, returns a tuple (data, metadata). Currently supported for HDF5, NIfTI, and TXRM/TXM/XRM files. | False
contains | str | Used when loading a directory of files (stack). Specifies a unique substring that must be present in the filenames (e.g., "slice_"), which helps avoid loading unrelated files in the same folder. | None
force_load | bool | Safety override. If the file size exceeds available system memory, a MemoryError is raised; set this to True to convert the error into a warning and attempt loading anyway. | False
dim_order | tuple | Used for .vol files. Defines the dimension order (Z, Y, X). | (2, 1, 0)
progress_bar | bool | If True, displays a progress bar during loading (Linux/POSIX only). | False
display_memory_usage | bool | If True, prints a report of memory usage after loading. | False
**kwargs | Any | Additional arguments passed to the underlying DataLoader class. | {}

Returns:

vol (ndarray, virtual stack object, or tuple): The loaded data. The specific type depends on the input parameters and file format:

  • numpy.ndarray: Standard in-memory array (if virtual_stack=False).
  • Virtual stack object (if virtual_stack=True), one of:
    • numpy.memmap (TIFF, VOL)
    • dask.array.Array (Zarr, TXRM)
    • h5py.Dataset (HDF5)
    • nibabel.arrayproxy.ArrayProxy (NIfTI)
  • tuple: (data, metadata) if return_metadata=True.

Raises:

  • MemoryError: If the file is larger than available RAM and force_load=False.
  • ValueError: If the file format is not supported or the path is invalid.

Example
import qim3d

# 1. Simple load of a single 3D file
vol = qim3d.io.load("path/to/scan.tif")

# 2. Load a large file lazily (virtual stack) to check shape
vol_lazy = qim3d.io.load("path/to/huge_scan.h5", virtual_stack=True)
print(vol_lazy.shape) 

# 3. Load a specific dataset from an HDF5 file
vol = qim3d.io.load("data.h5", dataset_name="reconstruction")
Loading from Tiff stack

Volumes can also be loaded from a series of .tiff files, i.e. a stack with one file per slice.

import qim3d

# Generate volume
vol = qim3d.generate.volume(noise_scale = 0.015)

# Save as a .tiff stack
# The parameter `basename` sets the prefix of the output files.
qim3d.io.save("data_directory", vol, basename="blob-slices", sliced_dim=0)

# Load the volume from the .tiff stack
# Here `contains` selects only the files whose names include that string
loaded_vol = qim3d.io.load("data_directory", contains="blob-slices", progress_bar=True)
Source code in qim3d/io/_loading.py
def load(
    path: str | os.PathLike,
    virtual_stack: bool = False,
    dataset_name: str | None = None,
    return_metadata: bool = False,
    contains: str | None = None,
    force_load: bool = False,
    dim_order: tuple = (2, 1, 0),
    progress_bar: bool = False,
    display_memory_usage: bool = False,
    **kwargs,
) -> np.ndarray:
    """
    General-purpose function to load, import, or read 3D data from various file formats.

    Automatically identifies the file format based on the extension and uses the appropriate backend to open the file. 
    It supports a wide range of standard bio-imaging and material science formats.

    **Supported Formats:**

    * **Images:** TIFF (`.tif`, `.tiff`), PIL supported formats (`.png`, `.jpg`, etc.)
    * **Volumes:** DICOM (`.dcm`), NIfTI (`.nii`, `.nii.gz`)
    * **HDF5-based:** HDF5 (`.h5`), TXRM/TXM/XRM (`.txrm`, `.txm`)
    * **Raw/Binary:** VGI/VOL (`.vgi`)
    * **Cloud/Chunked:** Zarr (`.zarr`)
    * **Stacks:** Directories containing sequences of 2D images (TIFF or DICOM series).

    **Memory Management (Virtual Stack):**

    For datasets that are too large to fit into RAM, use `virtual_stack=True`. This enables 
    **lazy loading** (or memory mapping), allowing you to inspect metadata (shape, data type) 
    and slice specific sub-regions of the volume without reading the entire file into memory immediately.

    Args:
        path (str or os.PathLike): 
            The path to the file or directory to load. If a directory is provided, the function 
            attempts to load it as a stack of images (e.g., a DICOM series or TIFF sequence).
        virtual_stack (bool, optional): 
            If `True`, enables lazy loading. The data is not read into memory until explicitly accessed. 
            Useful for inspecting large files.
        dataset_name (str, optional): 
            Used only for HDF5 files containing multiple datasets. Specifies which internal dataset 
            to load.
        return_metadata (bool, optional): 
            If `True`, returns a tuple `(data, metadata)`. Currently supported for HDF5, NIfTI, 
            and TXRM/TXM/XRM files.
        contains (str, optional): 
            Used when loading a directory of files (Stack). Specifies a unique substring that must 
            be present in the filenames (e.g., "slice_"). Identifying files by this string helps 
            avoid loading unrelated files in the same folder.
        force_load (bool, optional): 
            Safety override. If the file size exceeds available system memory, a `MemoryError` 
            is raised. Set this to `True` to convert the error into a warning and attempt loading anyway.
        dim_order (tuple, optional): 
            Used for `.vol` files. Defines the dimension order (Z, Y, X).
        progress_bar (bool, optional): 
            If `True`, displays a progress bar during loading (Linux/POSIX only).
        display_memory_usage (bool, optional): 
            If `True`, prints a report of memory usage after loading.
        **kwargs (Any): 
            Additional arguments passed to the underlying `DataLoader` class.

    Returns:
        vol (ndarray, Virtual Stack Object, or tuple):
            The loaded data.

            The specific type depends on the input parameters and file format:

            * **numpy.ndarray**: Standard in-memory array (if `virtual_stack=False`).
            * **Virtual Stack Object**: If `virtual_stack=True`, returns a lazy-loading object:
                * `numpy.memmap` (TIFF, VOL)
                * `dask.array.Array` (Zarr, TXRM)
                * `h5py.Dataset` (HDF5)
                * `nibabel.arrayproxy.ArrayProxy` (NIfTI)
            * **tuple**: `(data, metadata)` if `return_metadata=True`.

    Raises:
        MemoryError: If the file is larger than available RAM and `force_load=False`.
        ValueError: If the file format is not supported or the path is invalid.

    Example:
        ```python
        import qim3d

        # 1. Simple load of a single 3D file
        vol = qim3d.io.load("path/to/scan.tif")

        # 2. Load a large file lazily (virtual stack) to check shape
        vol_lazy = qim3d.io.load("path/to/huge_scan.h5", virtual_stack=True)
        print(vol_lazy.shape) 

        # 3. Load a specific dataset from an HDF5 file
        vol = qim3d.io.load("data.h5", dataset_name="reconstruction")
        ```
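    Example: Virtual stack behaviour
        The virtual-stack objects listed under Returns behave like arrays but read
        from disk on access. A minimal sketch of that behaviour with `numpy.memmap`
        (the raw file here is synthetic, not a qim3d file format):

        ```python
        import os
        import tempfile

        import numpy as np

        # Write a small raw file to memory-map (synthetic data)
        path = os.path.join(tempfile.mkdtemp(), "vol.raw")
        np.arange(24, dtype=np.uint8).tofile(path)

        # Slices are read from disk only when accessed
        vol = np.memmap(path, dtype=np.uint8, mode="r", shape=(2, 3, 4))
        print(vol.shape)          # (2, 3, 4)
        print(int(vol[1, 2, 3]))  # 23
        ```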

    Example: Loading from a TIFF stack
        Volumes can also be loaded from a series of `.tiff` files, where the stack contains one file per slice.

        ```python
        import qim3d

        # Generate volume
        vol = qim3d.generate.volume(noise_scale = 0.015)

        # Save as a .tiff stack
        # The parameter `basename` sets the filename prefix for each slice.
        qim3d.io.save("data_directory", vol, basename="blob-slices", sliced_dim=0)

        # Load the volume from the .tiff stack
        # `contains` selects only the files whose names include the given substring
        loaded_vol = qim3d.io.load("data_directory", contains="blob-slices", progress_bar=True)
        ```
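    Example: How `contains` filters a directory
        A sketch of the substring matching described above; the file names are
        illustrative, not taken from a real folder:

        ```python
        # Only files whose names include the given substring are kept
        names = ["blob-slices_000.tif", "blob-slices_001.tif", "notes.txt"]
        matched = sorted(n for n in names if "blob-slices" in n)
        print(matched)  # ['blob-slices_000.tif', 'blob-slices_001.tif']
        ```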
    """

    loader = DataLoader(
        virtual_stack=virtual_stack,
        dataset_name=dataset_name,
        return_metadata=return_metadata,
        contains=contains,
        force_load=force_load,
        dim_order=dim_order,
        **kwargs,
    )

    if progress_bar and os.name == 'posix':
        with FileLoadingProgressBar(path):
            data = loader.load(path)
    else:
        data = loader.load(path)

    def log_memory_info(data: np.ndarray) -> None:
        mem = Memory()
        log.info(
            'Volume using %s of memory\n',
            sizeof(data[0].nbytes if isinstance(data, tuple) else data.nbytes),
        )
        mem.report()

    if return_metadata and not isinstance(data, tuple):
        log.warning('The file format does not contain metadata')

    if not virtual_stack:
        if display_memory_usage:
            log_memory_info(data)
    else:
        # Only log if the data is not a plain np.ndarray, i.e., it is some kind of memmap object
        if (
            type(data[0]) if isinstance(data, tuple) else type(data)
        ) is not np.ndarray:
            log.info('Using virtual stack')
        else:
            log.warning('Virtual stack is not supported for this file format')
            if display_memory_usage:
                log_memory_info(data)

    return data

qim3d.io.save

save(
    path,
    data,
    replace=False,
    compression=False,
    basename=None,
    sliced_dim=0,
    chunk_shape='auto',
    **kwargs,
)

General-purpose function to save, export, or write 3D data to disk.

Automatically detects the desired file format based on the filename extension. Supports saving entire volumes as single files or exporting them as a sequence of 2D slices (stack).

Supported Formats:

  • Images: TIFF (.tif), standard image formats (.png, .jpg, .bmp, etc.)
  • Volumes: TIFF (.tif), NIfTI (.nii, .nii.gz), HDF5 (.h5)
  • Cloud/Chunked: Zarr (.zarr)
  • Medical: DICOM (.dcm) - Saves as a series if a directory is provided.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | str or PathLike | The destination path. Can be a filename (e.g., `volume.tif`) or a directory (e.g., `slices/`) if saving a stack. | required |
| `data` | ndarray or Array | The 2D or 3D image data to be saved. | required |
| `replace` | bool | If `True`, overwrites the file if it already exists. If `False`, raises a `FileExistsError` to prevent accidental data loss. | `False` |
| `compression` | bool | If `True`, applies lossless compression (e.g., Deflate for HDF5/TIFF) to reduce file size. | `False` |
| `basename` | str | Used when saving a 3D volume as a stack of 2D files. Defines the common prefix for the filenames (e.g., `basename="slice"` results in `slice_000.tif`, `slice_001.tif`). | `None` |
| `sliced_dim` | int | The dimension along which to slice the volume when saving as a stack. Usually `0` (Z-axis). | `0` |
| `chunk_shape` | str or tuple | Chunk layout for chunked formats such as Zarr; `'auto'` lets the backend choose. | `'auto'` |
| `**kwargs` | Any | Additional arguments passed to the specific backend saver. | `{}` |

Raises:

| Type | Description |
| --- | --- |
| `FileExistsError` | If `replace=False` and the destination already exists. |
| `ValueError` | If the file extension is not supported. |

Example

```python
import qim3d
import numpy as np

# Generate sample data
vol = qim3d.generate.volume(noise_scale=0.015)

# 1. Save as a single 3D TIFF file
qim3d.io.save("output_volume.tif", vol)

# 2. Save as an HDF5 file with compression
qim3d.io.save("output_volume.h5", vol, compression=True)
```

Saving as a Stack of Slices

To save a 3D volume as individual 2D images (e.g., for inspection in standard image viewers), provide a directory `path` and a `basename`.

```python
import qim3d

vol = qim3d.generate.volume()

# Save as slice_000.tif, slice_001.tif, etc. inside the 'slices' folder
qim3d.io.save("slices_folder", vol, basename="slice", sliced_dim=0)
```
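The stack writer derives one filename per slice from `basename`. A sketch of that naming pattern (the zero-padding width shown is an assumption, not taken from the source):

```python
basename = "slice"
# Hypothetical reproduction of the slice_000.tif, slice_001.tif, ... pattern
names = [f"{basename}_{i:03d}.tif" for i in range(3)]
print(names)  # ['slice_000.tif', 'slice_001.tif', 'slice_002.tif']
```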
Source code in qim3d/io/_saving.py
def save(
    path: str | os.PathLike,
    data: np.ndarray,
    replace: bool = False,
    compression: bool = False,
    basename: str | None = None,
    sliced_dim: int = 0,
    chunk_shape: str = 'auto',
    **kwargs,
) -> None:
    """
    General-purpose function to save, export, or write 3D data to disk.

    Automatically detects the desired file format based on the filename extension. 
    Supports saving entire volumes as single files or exporting them as a sequence 
    of 2D slices (stack).

    **Supported Formats:**

    * **Images:** TIFF (`.tif`), standard image formats (`.png`, `.jpg`, `.bmp`, etc.)
    * **Volumes:** TIFF (`.tif`), NIfTI (`.nii`, `.nii.gz`), HDF5 (`.h5`)
    * **Cloud/Chunked:** Zarr (`.zarr`)
    * **Medical:** DICOM (`.dcm`) - *Saves as a series if a directory is provided.*

    Args:
        path (str or os.PathLike): 
            The destination path. Can be a filename (e.g., `volume.tif`) or a directory 
            (e.g., `slices/`) if saving a stack.
        data (numpy.ndarray or dask.array.Array): 
            The 2D or 3D image data to be saved.
        replace (bool, optional): 
            If `True`, overwrites the file if it already exists. If `False`, raises a 
            `FileExistsError` to prevent accidental data loss.
        compression (bool, optional): 
            If `True`, applies lossless compression (e.g., Deflate for HDF5/TIFF) to reduce file size.
        basename (str, optional): 
            Used when saving a 3D volume as a stack of 2D files. Defines the common prefix 
            for the filenames (e.g., `basename="slice"` results in `slice_000.tif`, `slice_001.tif`).
        sliced_dim (int, optional): 
            The dimension along which to slice the volume when saving as a stack. 
            Usually `0` (Z-axis).
        chunk_shape (str or tuple, optional): 
            Chunk layout for chunked formats such as Zarr; `'auto'` lets the backend choose.
        **kwargs (Any): 
            Additional arguments passed to the specific backend saver.


    Raises:
        FileExistsError: If `replace=False` and the destination already exists.
        ValueError: If the file extension is not supported.

    Example:
        ```python
        import qim3d
        import numpy as np

        # Generate sample data
        vol = qim3d.generate.volume(noise_scale=0.015)

        # 1. Save as a single 3D TIFF file
        qim3d.io.save("output_volume.tif", vol)

        # 2. Save as an HDF5 file with compression
        qim3d.io.save("output_volume.h5", vol, compression=True)
        ```

    Example: Saving as a Stack of Slices
        To save a 3D volume as individual 2D images (e.g., for inspection in standard image viewers), 
        provide a directory `path` and a `basename`.

        ```python
        import qim3d

        vol = qim3d.generate.volume()

        # Save as slice_000.tif, slice_001.tif, etc. inside the 'slices' folder
        qim3d.io.save("slices_folder", vol, basename="slice", sliced_dim=0)
        ```
    """

    DataSaver(
        replace=replace,
        compression=compression,
        basename=basename,
        sliced_dim=sliced_dim,
        chunk_shape=chunk_shape,
        **kwargs,
    ).save(path, data)

qim3d.io.export_ome_zarr

export_ome_zarr(
    path,
    data,
    chunk_size=256,
    downsample_rate=2,
    order=1,
    replace=False,
    method='scaleZYXdask_coarsen',
    progress_bar=True,
    progress_bar_repeat_time='auto',
)

Exports 3D data to the OME-Zarr (NGFF) format with multi-scale pyramidal levels.

Generates a Next Generation File Format (NGFF) representation of the input volume. This format creates a multi-resolution pyramid (downsampled copies), allowing for efficient visualization and streaming of large datasets over networks or the cloud.

Key Features:

  • Chunking: Data is divided into small blocks (chunk_size) for efficient random access.
  • Pyramidal Levels: Automatically calculates and generates lower-resolution levels until the dataset fits within a single chunk.
  • Dask Integration: Efficiently handles larger-than-memory datasets by processing chunks in parallel.
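The number of pyramid levels follows from the volume size, `chunk_size`, and `downsample_rate`: levels are added until the largest dimension fits within a single chunk. A small helper reproducing that calculation (a sketch mirroring the formula in the source below; `n_pyramid_levels` is not part of the qim3d API):

```python
import math

def n_pyramid_levels(shape, chunk_size=256, downsample_rate=2):
    # Downsample until the largest dimension fits within one chunk;
    # the +1 counts the full-resolution level itself.
    max_dim = max(shape)
    return math.ceil(math.log(max_dim / chunk_size) / math.log(downsample_rate)) + 1

print(n_pyramid_levels((1024, 1024, 1024)))  # 3
```

For a 1024³ volume with 256³ chunks and a rate of 2, two downsampled levels (512³ and 256³) are generated on top of the full resolution.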

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | str or PathLike | The destination directory path. The directory will be created as a Zarr group (e.g., `"data.zarr"`). | required |
| `data` | ndarray or Array | The 3D image volume to export. | required |
| `chunk_size` | int | The size of the chunks (cubes) for storage (e.g., `256` creates 256x256x256 blocks). Smaller chunks improve access time for specific regions but increase file count. | `256` |
| `downsample_rate` | int | The reduction factor between pyramid levels. A rate of `2` means each level is half the size of the previous one. | `2` |
| `order` | int | The interpolation order for downsampling. `0` = nearest neighbor (faster), `1` = linear. | `1` |
| `replace` | bool | If `True`, deletes the existing directory at `path` before writing. If `False`, raises an error if the directory exists. | `False` |
| `method` | str | The downsampling strategy. `"scaleZYXdask_coarsen"` uses block averaging (faster, reduces noise); `"scaleZYXdask"` uses interpolation (slower, potentially sharper). | `'scaleZYXdask_coarsen'` |
| `progress_bar` | bool | If `True`, displays a progress bar tracking the chunk writing process. | `True` |
| `progress_bar_repeat_time` | str or int | Interval in seconds for updating the progress bar. | `'auto'` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `path` exists and `replace=False`. |
| `ValueError` | If `downsample_rate <= 1`. |

Example

```python
import qim3d

# Load a sample dataset
downloader = qim3d.io.Downloader()
data = downloader.Snail.Escargot(load_file=True)

# Export to OME-Zarr with 2x downsampling per level
qim3d.io.export_ome_zarr("Escargot.zarr", data, chunk_size=128, downsample_rate=2)
```
Source code in qim3d/io/_ome_zarr.py
def export_ome_zarr(
    path: str | os.PathLike,
    data: np.ndarray | da.core.Array,
    chunk_size: int = 256,
    downsample_rate: int = 2,
    order: int = 1,
    replace: bool = False,
    method: str = 'scaleZYXdask_coarsen',
    progress_bar: bool = True,
    progress_bar_repeat_time: str = 'auto',
) -> None:
    """
    Exports 3D data to the OME-Zarr (NGFF) format with multi-scale pyramidal levels.

    Generates a **Next Generation File Format (NGFF)** representation of the input volume. 
    This format creates a multi-resolution pyramid (downsampled copies), allowing for efficient 
    visualization and streaming of large datasets over networks or the cloud.

    **Key Features:**

    * **Chunking:** Data is divided into small blocks (`chunk_size`) for efficient random access.
    * **Pyramidal Levels:** Automatically calculates and generates lower-resolution levels 
      until the dataset fits within a single chunk.
    * **Dask Integration:** Efficiently handles larger-than-memory datasets by processing chunks in parallel.

    Args:
        path (str or os.PathLike): 
            The destination directory path. The directory will be created as a Zarr group. (E.g. `"data.zarr"`).
        data (numpy.ndarray or dask.array.Array): 
            The 3D image volume to export.
        chunk_size (int, optional): 
            The size of the chunks (cubes) for storage (e.g., `256` creates 256x256x256 blocks). 
            Smaller chunks improve access time for specific regions but increase file count.
        downsample_rate (int, optional): 
            The reduction factor between pyramid levels. A rate of `2` means each level is 
            half the size of the previous one.
        order (int, optional): 
            The interpolation order for downsampling. `0` = Nearest Neighbor (faster) and `1` = Linear.
        replace (bool, optional): 
            If `True`, deletes the existing directory at `path` before writing. 
            If `False`, raises an error if the directory exists.
        method (str, optional): 
            The downsampling strategy. 
            `"scaleZYXdask_coarsen"` uses block averaging (faster, reduces noise). 
            `"scaleZYXdask"` uses interpolation (slower, potentially sharper).
        progress_bar (bool, optional): 
            If `True`, displays a progress bar tracking the chunk writing process.
        progress_bar_repeat_time (str or int, optional): 
            Interval in seconds for updating the progress bar.

    Raises:
        ValueError: If `path` exists and `replace=False`.
        ValueError: If `downsample_rate` <= 1.

    Example:
        ```python
        import qim3d

        # Load a sample dataset
        downloader = qim3d.io.Downloader()
        data = downloader.Snail.Escargot(load_file=True)

        # Export to OME-Zarr with 2x downsampling per level
        qim3d.io.export_ome_zarr("Escargot.zarr", data, chunk_size=128, downsample_rate=2)
        ```
    """

    # Check if directory exists
    if os.path.exists(path):
        if replace:
            shutil.rmtree(path)
        else:
            raise ValueError(
                f'Directory {path} already exists. Use replace=True to overwrite.'
            )

    # Check if downsample_rate is valid
    if downsample_rate <= 1:
        raise ValueError('Downsample rate must be greater than 1.')

    log.info(f'Exporting data to OME-Zarr format at {path}')

    # Get the number of scales
    max_dim = np.max(np.shape(data))
    nscales = math.ceil(math.log(max_dim / chunk_size) / math.log(downsample_rate))
    log.info(f'Number of scales: {nscales + 1}')

    # Create scaler
    scaler = OMEScaler(
        downscale=downsample_rate, max_layer=nscales, method=method, order=order
    )

    # write the image data
    os.mkdir(path)
    store = parse_url(path, mode='w').store
    root = zarr.group(store=store)

    # Check if we want to process using Dask
    if 'dask' in method and not isinstance(data, da.Array):
        log.info('\nConverting input data to Dask array')
        data = da.from_array(data, chunks=(chunk_size, chunk_size, chunk_size))
        log.info(f' - shape...: {data.shape}\n - chunks..: {data.chunksize}\n')

    elif 'dask' in method and isinstance(data, da.Array):
        log.info('\nInput data will be rechunked')
        data = data.rechunk((chunk_size, chunk_size, chunk_size))
        log.info(f' - shape...: {data.shape}\n - chunks..: {data.chunksize}\n')

    log.info('Calculating the multi-scale pyramid')

    # Generate multi-scale pyramid
    mip = scaler.func(data)

    log.info('Writing data to disk')
    kwargs = dict(
        pyramid=mip,
        group=root,
        fmt=CurrentFormat(),
        axes='zyx',
        name=None,
        compute=True,
        storage_options=dict(chunks=(chunk_size, chunk_size, chunk_size)),
    )
    if progress_bar:

        # Get number of chunks for each shape and sum them together
        n_chunks = sum(
            np.prod(np.ceil(np.array(scaled_data.shape) / chunk_size))
            for scaled_data in mip
        )

        with OmeZarrExportProgressBar(
            path=path, n_chunks=n_chunks, reapeat_time=progress_bar_repeat_time
        ):
            write_multiscale(**kwargs)
    else:
        write_multiscale(**kwargs)

    log.info('\nAll done!')

    return

qim3d.io.import_ome_zarr

import_ome_zarr(path, scale=0, load=True)

Imports or reads image data from an OME-Zarr (NGFF) container.

Allows reading specific resolution levels from a multi-scale dataset. This is particularly useful for previewing large datasets by loading a coarse scale before fetching the full-resolution data.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | str or PathLike | The path to the OME-Zarr file (directory). | required |
| `scale` | int or str | The resolution level to load. `0` is the full resolution (finest); higher integers are progressively lower resolutions. Also accepts `'highest'` (alias for `0`) or `'lowest'` (coarsest available scale). | `0` |
| `load` | bool | If `True`, reads the data into memory as a `numpy.ndarray`. If `False`, returns a `dask.array.Array` for lazy loading/processing. | `True` |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `vol` | ndarray or Array | The requested image data: a `numpy.ndarray` loaded into memory (if `load=True`), or a lazy-loaded `dask.array.Array` connected to the Zarr store (if `load=False`). |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the requested `scale` index exceeds the available levels in the dataset. |

Example

```python
import qim3d

# 1. Load the full resolution data into memory
data = qim3d.io.import_ome_zarr("Escargot.zarr", scale=0, load=True)

# 2. Lazy load the lowest resolution (thumbnail/preview)
preview_lazy = qim3d.io.import_ome_zarr("Escargot.zarr", scale='lowest', load=False)
print(preview_lazy.shape)
```
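Before loading, the `scale` aliases are resolved to integer pyramid indices. A sketch of that mapping (it mirrors the alias handling in the source below; `resolve_scale` is not part of the qim3d API):

```python
def resolve_scale(scale, n_levels):
    # 'highest' is the finest level (0); 'lowest' is the coarsest (n_levels - 1)
    if scale == 'highest':
        scale = 0
    if scale == 'lowest':
        scale = n_levels - 1
    if scale >= n_levels:
        raise ValueError(f'Choose a scale between 0 and {n_levels - 1}.')
    return scale

print(resolve_scale('lowest', 4))   # 3
print(resolve_scale('highest', 4))  # 0
```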
Source code in qim3d/io/_ome_zarr.py
def import_ome_zarr(
    path: str | os.PathLike, scale: int = 0, load: bool = True
) -> np.ndarray:
    """
    Imports or reads image data from an OME-Zarr (NGFF) container.

    Allows reading specific resolution levels from a multi-scale dataset. This is particularly 
    useful for previewing large datasets by loading a coarse scale before fetching the full-resolution data.

    Args:
        path (str or os.PathLike): 
            The path to the OME-Zarr file (directory).
        scale (int or str, optional): 
            The resolution level to load.
            `0` is the full resolution (finest). Higher integers are progressively lower resolutions.
            Can also accept `'highest'` (alias for 0) or `'lowest'` (coarsest available scale).
        load (bool, optional): 
            If `True`, reads the data into memory as a `numpy.ndarray`. 
            If `False`, returns a `dask.array.Array` for lazy loading/processing.

    Returns:
        vol (numpy.ndarray or dask.array.Array):
            The requested image data.

            * **numpy.ndarray**: The full image data loaded into memory (if `load=True`).
            * **dask.array.Array**: A lazy-loaded Dask array connected to the Zarr store (if `load=False`).

    Raises:
        ValueError: If the requested `scale` index exceeds the available levels in the dataset.

    Example:
        ```python
        import qim3d

        # 1. Load the full resolution data into memory
        data = qim3d.io.import_ome_zarr("Escargot.zarr", scale=0, load=True)

        # 2. Lazy load the lowest resolution (thumbnail/preview)
        preview_lazy = qim3d.io.import_ome_zarr("Escargot.zarr", scale='lowest', load=False)
        print(preview_lazy.shape)
        ```
    """

    # Read the image data

    reader = Reader(parse_url(path))
    nodes = list(reader())
    image_node = nodes[0]
    dask_data = image_node.data

    log.info(f'Data contains {len(dask_data)} scales:')
    for i in range(len(dask_data)):
        log.info(f'- Scale {i}: {dask_data[i].shape}')

    if scale == 'highest':
        scale = 0

    if scale == 'lowest':
        scale = len(dask_data) - 1

    if scale >= len(dask_data):
        raise ValueError(
            f'Scale {scale} does not exist in the data. Please choose a scale between 0 and {len(dask_data)-1}.'
        )

    log.info(f'\nLoading scale {scale} with shape {dask_data[scale].shape}')

    if load:
        vol = dask_data[scale].compute()
    else:
        vol = dask_data[scale]

    return vol

qim3d.io.load_mesh

load_mesh(filename)

Imports or loads a 3D surface mesh from a file.

Reads geometry data from standard mesh formats and returns a PyGEL3D Manifold object. This is useful for analyzing surface structures, isosurfaces, or exported models.

Supported Formats:

  • OBJ (.obj) - Wavefront Object
  • PLY (.ply) - Polygon File Format
  • OFF (.off) - Object File Format
  • X3D (.x3d) - Extensible 3D

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `filename` | str or PathLike | The path to the mesh file to load. | required |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `mesh` | Manifold or None | A PyGEL3D object containing the mesh geometry (vertices, faces, edges). Returns `None` if loading fails. |

Example

```python
import qim3d

# Load a mesh file
mesh = qim3d.io.load_mesh("path/to/model.obj")

# Check number of vertices
print(len(mesh.vertices()))
```
Source code in qim3d/io/_loading.py
def load_mesh(filename: str) -> hmesh.Manifold:
    """
    Imports or loads a 3D surface mesh from a file.

    Reads geometry data from standard mesh formats and returns a PyGEL3D Manifold object. 
    This is useful for analyzing surface structures, isosurfaces, or exported models.

    **Supported Formats:**

    * **OBJ** (`.obj`) - Wavefront Object
    * **PLY** (`.ply`) - Polygon File Format
    * **OFF** (`.off`) - Object File Format
    * **X3D** (`.x3d`) - Extensible 3D

    Args:
        filename (str or os.PathLike): 
            The path to the mesh file to load.

    Returns:
        mesh (Manifold or None): 
            A PyGEL3D object containing the mesh geometry (vertices, faces, edges). 
            Returns `None` if loading fails.

    Example:
        ```python
        import qim3d

        # Load a mesh file
        mesh = qim3d.io.load_mesh("path/to/model.obj")

        # Check number of vertices
        print(len(mesh.vertices()))
        ```
    """
    mesh = hmesh.load(filename)

    return mesh

qim3d.io.save_mesh

save_mesh(filename, mesh)

Exports or saves a 3D surface mesh to a file.

Writes the geometry of a PyGEL3D Manifold object to disk. The output format is determined automatically by the filename extension.

Supported Formats:

  • OBJ (.obj)
  • OFF (.off)
  • X3D (.x3d)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `filename` | str or PathLike | The destination path. The extension determines the format. | required |
| `mesh` | Manifold | The mesh object containing the geometry to be saved. | required |

Example

```python
import qim3d

# Generate a volume and extract a mesh (isosurface)
vol = qim3d.generate.volume(noise_scale=0.05)
mesh = qim3d.mesh.from_volume(vol)

# Save the mesh to an OBJ file
qim3d.io.save_mesh("surface.obj", mesh)
```
Source code in qim3d/io/_saving.py
def save_mesh(filename: str, mesh: hmesh.Manifold) -> None:
    """
    Exports or saves a 3D surface mesh to a file.

    Writes the geometry of a PyGEL3D Manifold object to disk. The output format is 
    determined automatically by the filename extension.

    **Supported Formats:**

    * **OBJ** (`.obj`)
    * **OFF** (`.off`)
    * **X3D** (`.x3d`)

    Args:
        filename (str or os.PathLike): 
            The destination path. The extension determines the format.
        mesh (hmesh.Manifold): 
            The mesh object containing the geometry to be saved.

    Example:
        ```python
        import qim3d

        # Generate a volume and extract a mesh (isosurface)
        vol = qim3d.generate.volume(noise_scale=0.05)
        mesh = qim3d.mesh.from_volume(vol)

        # Save the mesh to an OBJ file
        qim3d.io.save_mesh("surface.obj", mesh)
        ```
    """
    # Export the mesh to the specified filename
    hmesh.save(filename, mesh)