Command-line Tools For Viewing HDF5 Files




File Content and Structure

The h5dump and h5ls tools can both be used to view the contents of an HDF5 file. The tools are discussed below:

h5dump

The h5dump tool displays the contents of an HDF5 file as text. By default (with no options), h5dump displays the entire contents of the file. There are many h5dump options for examining specific details of a file. To see all of the available options, specify -h or --help:

h5dump -h

The following h5dump options can be helpful in viewing the content and structure of a file:

Option              Description                                                Comment
-n, --contents      Displays a list of the objects in a file                   See Example 1
-n 1, --contents=1  Displays a list of the objects and attributes in a file    See Example 6
-H, --header        Displays header information only (no data)                 See Example 2
-A 0, --onlyattr=0  Suppresses the display of attributes                       See Example 2
-N P, --any_path=P  Displays any object or attribute that matches path P       See Example 6

Example 1

The following command displays a list of the objects in the file OMI-Aura.he5 (an HDF-EOS5 file):

h5dump -n OMI-Aura.he5

As shown in the output below, the type of each object (group or dataset) is listed on the left, followed by the object name. You can see that this file contains two groups directly below the root group, HDFEOS and HDFEOS INFORMATION:

HDF5 "OMI-Aura.he5" {
FILE_CONTENTS {
group /
group /HDFEOS
group /HDFEOS/ADDITIONAL
group /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
group /HDFEOS/GRIDS
group /HDFEOS/GRIDS/OMI Column Amount O3
group /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields
dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ColumnAmountO3
dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/RadiativeCloudFraction
dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle
dataset /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ViewingZenithAngle
group /HDFEOS INFORMATION
dataset /HDFEOS INFORMATION/StructMetadata.0
}
}
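
If you need a similar listing from within a program, the same traversal can be done with the HDF5 C API. Below is a minimal sketch (assuming HDF5 1.12 or later; error checking omitted) that visits every link in the file and prints its path, much like h5dump -n:

#include "hdf5.h"
#include <stdio.h>

/* Callback invoked once for every link found below the starting group */
static herr_t
print_link(hid_t group, const char *name, const H5L_info2_t *info, void *op_data)
{
    (void)group; (void)info; (void)op_data;
    printf("/%s\n", name);   /* names are reported relative to the starting group */
    return 0;                /* return 0 to continue the traversal */
}

int
main(void)
{
    hid_t file = H5Fopen("OMI-Aura.he5", H5F_ACC_RDONLY, H5P_DEFAULT);

    /* Recursively visit all links, in increasing name order */
    H5Lvisit2(file, H5_INDEX_NAME, H5_ITER_INC, print_link, NULL);

    H5Fclose(file);
    return 0;
}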

Example 2

The file structure of the OMI-Aura.he5 file can be seen with the following command. The -A 0 option suppresses the display of attributes:

h5dump -H -A 0 OMI-Aura.he5

Output of this command is shown below:

HDF5 "OMI-Aura.he5" {
GROUP "/" {
GROUP "HDFEOS" {
GROUP "ADDITIONAL" {
GROUP "FILE_ATTRIBUTES" {
}
}
GROUP "GRIDS" {
GROUP "OMI Column Amount O3" {
GROUP "Data Fields" {
DATASET "ColumnAmountO3" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "RadiativeCloudFraction" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "SolarZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "ViewingZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
}
}
}
}
GROUP "HDFEOS INFORMATION" {
DATASET "StructMetadata.0" {
DATATYPE H5T_STRING {
STRSIZE 32000;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SCALAR
}
}
}
}

h5ls

By default, the h5ls tool displays only the objects in the root group. It will not display items in groups beneath the root group unless they are specified. Useful h5ls options for viewing file content and structure are:

Option  Description                                                                                                  Comment
-r      Lists all groups and objects recursively                                                                     See Example 3
-v      Generates verbose output (lists dataset properties, attributes and attribute values, but no dataset values)

Example 3

The following command shows the contents of the HDF-EOS5 file OMI-Aura.he5. The output is similar to that of h5dump -n, except that h5ls also shows dataspace information for each dataset:

h5ls -r OMI-Aura.he5

The output is shown below:

/ Group
/HDFEOS Group
/HDFEOS/ADDITIONAL Group
/HDFEOS/ADDITIONAL/FILE_ATTRIBUTES Group
/HDFEOS/GRIDS Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3 Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}
/HDFEOS\ INFORMATION Group
/HDFEOS\ INFORMATION/StructMetadata.0 Dataset {SCALAR}

Datasets and Dataset Properties

Both h5dump and h5ls can be used to view specific datasets.

h5dump

Useful h5dump options for examining specific datasets include:

Option              Description                                                           Comment
-d D, --dataset=D   Displays dataset D                                                    See Example 4
-H, --header        Displays header information only                                      See Example 4
-p, --properties    Displays dataset filters, storage layout, and fill value properties   See Example 5
-A 0, --onlyattr=0  Suppresses the display of attributes                                  See Example 2
-N P, --any_path=P  Displays any object or attribute that matches path P                  See Example 6

Example 4

A specific dataset can be viewed with h5dump using the -d D option and specifying the entire path and name of the dataset for D. The path is important in identifying the correct dataset, as there can be multiple datasets with the same name. The path can be determined by looking at the objects in the file with h5dump -n.

The following example uses the groups.h5 file, which is created by the h5_crtgrpar.c example from the Learning the Basics examples. To display dset1 in the groups.h5 file below, specify the dataset /MyGroup/dset1. The -H option is used to suppress printing of the data values:

Contents of groups.h5

$ h5dump -n groups.h5
HDF5 "groups.h5" {
FILE_CONTENTS {
group /
group /MyGroup
group /MyGroup/Group_A
dataset /MyGroup/Group_A/dset2
group /MyGroup/Group_B
dataset /MyGroup/dset1
}
}

Display dataset "dset1"

$ h5dump -d "/MyGroup/dset1" -H groups.h5
HDF5 "groups.h5" {
DATASET "/MyGroup/dset1" {
DATATYPE H5T_STD_I32BE
DATASPACE SIMPLE { ( 3, 3 ) / ( 3, 3 ) }
}
}
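For reference, a file with this structure could be produced with a short C program along the following lines. This is only a sketch (it is not the actual h5_crtgrpar.c tutorial code, and error checking is omitted):

#include "hdf5.h"

int
main(void)
{
    hsize_t dims1[2] = {3, 3};    /* dataspace for /MyGroup/dset1         */
    hsize_t dims2[2] = {2, 10};   /* dataspace for /MyGroup/Group_A/dset2 */

    hid_t file  = H5Fcreate("groups.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t grp   = H5Gcreate2(file, "/MyGroup", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    hid_t grp_a = H5Gcreate2(file, "/MyGroup/Group_A", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    hid_t grp_b = H5Gcreate2(file, "/MyGroup/Group_B", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* /MyGroup/dset1 : 3 x 3, 32-bit big-endian integers (matches the h5dump header above) */
    hid_t space1 = H5Screate_simple(2, dims1, NULL);
    hid_t dset1  = H5Dcreate2(file, "/MyGroup/dset1", H5T_STD_I32BE, space1,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* /MyGroup/Group_A/dset2 : 2 x 10 */
    hid_t space2 = H5Screate_simple(2, dims2, NULL);
    hid_t dset2  = H5Dcreate2(file, "/MyGroup/Group_A/dset2", H5T_STD_I32BE, space2,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dclose(dset1); H5Dclose(dset2);
    H5Sclose(space1); H5Sclose(space2);
    H5Gclose(grp); H5Gclose(grp_a); H5Gclose(grp_b);
    H5Fclose(file);
    return 0;
}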

Example 5

The -p option is used to examine the dataset filters, storage layout, and fill value properties of a dataset.

This option can be useful for checking how well compression works, or even for analyzing performance and dataset size issues related to chunking. (The smaller the chunk size, the more chunks that HDF5 has to keep track of, which increases the size of the file and potentially affects performance.)

In the file shown below the dataset /DS1 is both chunked and compressed:

$ h5dump -H -p -d "/DS1" h5ex_d_gzip.h5
HDF5 "h5ex_d_gzip.h5" {
DATASET "/DS1" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 32, 64 ) / ( 32, 64 ) }
STORAGE_LAYOUT {
CHUNKED ( 4, 8 )
SIZE 5278 (1.552:1 COMPRESSION)
}
FILTERS {
COMPRESSION DEFLATE { LEVEL 9 }
}
FILLVALUE {
FILL_TIME H5D_FILL_TIME_IFSET
VALUE 0
}
ALLOCATION_TIME {
H5D_ALLOC_TIME_INCR
}
}
}

You can obtain the h5ex_d_gzip.c program that created this file, as well as the file created, from the Examples by API page.
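
The essential calls that produce such a chunked and gzip-compressed dataset are sketched below. This is a simplified sketch along the lines of the h5ex_d_gzip.c example (error checking omitted; for brevity the buffer here is all zeros rather than real data):

#include "hdf5.h"

int
main(void)
{
    hsize_t dims[2]  = {32, 64};    /* dataset dimensions */
    hsize_t chunk[2] = {4, 8};      /* chunk dimensions   */
    int     wdata[32][64] = {{0}};  /* data to write      */

    hid_t file  = H5Fcreate("h5ex_d_gzip.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);

    /* Dataset creation property list: enable chunking and gzip (DEFLATE) level 9 */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    H5Pset_deflate(dcpl, 9);

    hid_t dset = H5Dcreate2(file, "DS1", H5T_STD_I32LE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, wdata);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}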

h5ls

Specific datasets can be viewed with h5ls by simply appending the dataset path to the file name. For example, this command displays dataset dset2 in the groups.h5 file used in Example 4:

h5ls groups.h5/MyGroup/Group_A/dset2

Just the dataspace information gets displayed:

dset2 Dataset {2, 10}

The following options can be used to see detailed information about a dataset.

Option          Description
-v, --verbose   Generates verbose output (lists dataset properties, attributes and attribute values, but no dataset values)
-d, --data      Displays dataset values

The output of using -v is shown below:

$ h5ls -v groups.h5/MyGroup/Group_A/dset2
Opened "groups.h5" with sec2 driver.
dset2 Dataset {2/2, 10/10}
Location: 1:3840
Links: 1
Storage: 80 logical bytes, 80 allocated bytes, 100.00% utilization
Type: 32-bit big-endian integer

The output of using -d is shown below:

$ h5ls -d groups.h5/MyGroup/Group_A/dset2
dset2 Dataset {2, 10}
Data:
(0,0) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

Groups

Both h5dump and h5ls can be used to view specific groups in a file.

h5dump

The h5dump options that are useful for examining groups are:

Option Description
-g G, –group=G Displays group G and its members
-H, –header Displays header information only
-A 0, –onlyattr=0 Suppresses the display of attributes

To view the contents of the HDFEOS group in the OMI file mentioned previously, you can specify the path and name of the group as follows:

h5dump -g "/HDFEOS" -H -A 0 OMI-Aura.he5

The -A 0 option suppresses attributes and -H suppresses printing of data values:

HDF5 "OMI-Aura.he5" {
GROUP "/HDFEOS" {
GROUP "ADDITIONAL" {
GROUP "FILE_ATTRIBUTES" {
}
}
GROUP "GRIDS" {
GROUP "OMI Column Amount O3" {
GROUP "Data Fields" {
DATASET "ColumnAmountO3" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "RadiativeCloudFraction" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "SolarZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
DATASET "ViewingZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
}
}
}
}
}
}

h5ls

You can view the contents of a group with h5ls by specifying the group after the file name. To use h5ls to view the contents of the /HDFEOS group in the OMI-Aura.he5 file, type:

h5ls -r OMI-Aura.he5/HDFEOS

The output of this command is:

/ADDITIONAL Group
/ADDITIONAL/FILE_ATTRIBUTES Group
/GRIDS Group
/GRIDS/OMI\ Column\ Amount\ O3 Group
/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}

If you specify the -v option, you can also see the attributes and properties of the datasets.

Attributes

h5dump

Attributes are displayed by default if using h5dump. Some files contain many attributes, which can make it difficult to examine the objects in the file. Shown below are options that can help when using h5dump to work with files that have attributes.

Example 6

The -a A option will display an attribute. However, the path to the attribute must be included when specifying this option. For example, to see the ScaleFactor attribute in the OMI-Aura.he5 file, type:

h5dump -a "/HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle/ScaleFactor" OMI-Aura.he5

This command displays:

HDF5 "OMI-Aura.he5" {
ATTRIBUTE "ScaleFactor" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
}
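From a program, the same attribute can be read with the C API's attribute functions. A minimal sketch (error checking omitted) might look like this:

#include "hdf5.h"
#include <stdio.h>

int
main(void)
{
    double scale = 0.0;

    hid_t file = H5Fopen("OMI-Aura.he5", H5F_ACC_RDONLY, H5P_DEFAULT);

    /* Open the ScaleFactor attribute attached to the SolarZenithAngle dataset */
    hid_t attr = H5Aopen_by_name(file,
            "/HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle",
            "ScaleFactor", H5P_DEFAULT, H5P_DEFAULT);

    H5Aread(attr, H5T_NATIVE_DOUBLE, &scale);
    printf("ScaleFactor = %g\n", scale);

    H5Aclose(attr);
    H5Fclose(file);
    return 0;
}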

How can you determine the path to the attribute? This can be done by looking at the file contents with the -n 1 option:

h5dump -n 1 OMI-Aura.he5

Below is a portion of the output for this command:

HDF5 "OMI-Aura.he5" {
FILE_CONTENTS {
group /
group /HDFEOS
group /HDFEOS/ADDITIONAL
group /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/EndUTC
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDay
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDayOfYear
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleMonth
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleYear
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/InstrumentName
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitNumber
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitPeriod
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/PGEVersion
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/Period
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/ProcessLevel
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/StartUTC
attribute /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/TAI93At0zOfGranule
...

There can be multiple objects or attributes with the same name in a file. How can you make sure you are finding the correct object or attribute? You can first determine how many attributes there are with a specified name, and then examine the paths to them.

The -N option can be used to display all objects or attributes with a given name. For example, there are four attributes with the name ScaleFactor in the OMI-Aura.he5 file, as can be seen below with the -N option:

h5dump -N ScaleFactor OMI-Aura.he5

It outputs:

HDF5 "OMI-Aura.he5" {
ATTRIBUTE "ScaleFactor" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
ATTRIBUTE "ScaleFactor" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
ATTRIBUTE "ScaleFactor" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
ATTRIBUTE "ScaleFactor" {
DATATYPE H5T_IEEE_F64LE
DATASPACE SIMPLE { ( 1 ) / ( 1 ) }
DATA {
(0): 1
}
}
}

h5ls

If you include the -v (verbose) option with h5ls, you will see all of the attributes for the specified file, dataset, or group. Individual attributes cannot be displayed with h5ls.

Dataset Subset

h5dump

If you have a very large dataset, you may wish to subset or see just a portion of the dataset. This can be done with the following h5dump options.

Option                      Description
-d D, --dataset=D           Dataset D to subset
-s START, --start=START     Offset (start) of the subsetting selection
-S STRIDE, --stride=STRIDE  Stride (sampling along a dimension). The default (unspecified, or 1) selects every element along a dimension; a value of 2 selects every other element; a value of 3 selects every third element, and so on.
-c COUNT, --count=COUNT     Number of blocks to include in the selection
-k BLOCK, --block=BLOCK     Size of the block in a hyperslab. The default (unspecified, or 1) is a block the size of a single element.

The START (s), STRIDE (S), COUNT (c), and BLOCK (k) options define the shape and size of the selection. They are arrays with the same number of dimensions as the rank of the dataset's dataspace, and they all work together to define the selection. A change to one of these arrays can affect the others.

When specifying these h5dump options, a comma is used as the delimiter for each dimension in the option value. For example, with a 2-dimensional dataset, the option value is specified as "H,W", where H is the height and W is the width. If the offset is 0 for both dimensions, then START would be specified as follows:

-s "0,0"

There is also a shorthand way to specify these options with brackets at the end of the dataset name:

-d DATASETNAME[s;S;c;k]

Multiple dimensions are separated by commas. For example, a subset for a 2-dimensional dataset would be specified as follows:

-d DATASETNAME[s,s;S,S;c,c;k,k]

For a detailed understanding of how selections work, see the H5Sselect_hyperslab API in the HDF5 Reference Manual.

The dataset SolarZenithAngle in the OMI-Aura.he5 file can be used to illustrate these options. This dataset is a 2-dimensional dataset of size 720 (height) x 1440 (width). Simply viewing the dataset with the -d option displays far more data than can usefully be examined:

h5dump -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" OMI-Aura.he5

Subsetting narrows down the output that is displayed. In the following example, the first 15x10 elements (-c "15,10") are specified, beginning with position (0,0) (-s "0,0"):

h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" -s "0,0" -c "15,10" -w 0 OMI-Aura.he5

If using the shorthand method, specify:

h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle[0,0;;15,10;]" -w 0 OMI-Aura.he5

If not using the shorthand method, note that the -d option must be specified before the subsetting options.

The -A 0 option suppresses the printing of attributes.

The -w 0 option sets the number of columns of output to the maximum allowed value (65535). This ensures that there are enough columns specified for displaying the data.

Either command displays:

HDF5 "OMI-Aura.he5" {
DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
SUBSET {
START ( 0, 0 );
STRIDE ( 1, 1 );
COUNT ( 15, 10 );
BLOCK ( 1, 1 );
DATA {
(0,0): 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403,
(1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
(2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
(3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
(4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
(5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
(6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021,
(7,0): 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715,
(8,0): 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511,
(9,0): 77.658, 77.658, 77.658, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307,
(10,0): 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.102, 77.102,
(11,0): 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 77.102, 77.102,
(12,0): 76.34, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413,
(13,0): 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 77.195,
(14,0): 78.005, 78.005, 78.005, 78.005, 78.005, 78.005, 76.991, 76.991, 76.991, 76.991
}
}
}
}
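
The subsetting options map directly onto a hyperslab selection in the C API. A sketch (error checking omitted) that reads the same 15 x 10 region starting at (0,0) might look like this:

#include "hdf5.h"

int
main(void)
{
    hsize_t start[2]  = {0, 0};    /* -s : offset of the selection        */
    hsize_t stride[2] = {1, 1};    /* -S : sample every element           */
    hsize_t count[2]  = {15, 10};  /* -c : number of blocks to read       */
    hsize_t block[2]  = {1, 1};    /* -k : each block is a single element */
    float   buf[15][10];

    hid_t file = H5Fopen("OMI-Aura.he5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset = H5Dopen2(file,
            "/HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle", H5P_DEFAULT);

    /* Select the hyperslab in the file and describe the matching buffer in memory */
    hid_t fspace = H5Dget_space(dset);
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, stride, count, block);
    hid_t mspace = H5Screate_simple(2, count, NULL);

    H5Dread(dset, H5T_NATIVE_FLOAT, mspace, fspace, H5P_DEFAULT, buf);

    H5Sclose(mspace);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
    return 0;
}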

What if we wish to read three rows of three elements at a time (-c "3,3"), where each element is a 2 x 3 block (-k "2,3") and we wish to begin reading from the second row (-s "1,0")?

You can do that with the following command:

h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle"
-s "1,0" -S "2,3" -c "3,3" -k "2,3" -w 0 OMI-Aura.he5

In this case, the stride must be specified as 2 by 3 (or larger) to accommodate the reading of 2 by 3 blocks. If it is smaller, the command will fail with the error,

h5dump error: wrong subset selection; blocks overlap.

The output of the above command is shown below:

HDF5 "OMI-Aura.he5" {
DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
DATATYPE H5T_IEEE_F32LE
DATASPACE SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
SUBSET {
START ( 1, 0 );
STRIDE ( 2, 3 );
COUNT ( 3, 3 );
BLOCK ( 2, 3 );
DATA {
(1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
(2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
(3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
(4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
(5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
(6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021
}
}
}
}

Datatypes

h5dump

The following datatypes are discussed below, using the output of h5dump with HDF5 files from the Examples by API page: Array, References (both the new and the deprecated Object and Region References), and String.

Array

Users are sometimes confused by the difference between a dataset with an Array datatype (H5T_ARRAY) and a dataset whose dataspace is an array.

Typically, what these users want is a dataset with a simple datatype (such as integer or float) whose dataspace is an array, like the following dataset /DS1. It has a datatype of H5T_STD_I32LE (32-bit little-endian integer) and is a 4 by 7 array:

$ h5dump h5ex_d_rdwr.h5
HDF5 "h5ex_d_rdwr.h5" {
GROUP "/" {
DATASET "DS1" {
DATATYPE H5T_STD_I32LE
DATASPACE SIMPLE { ( 4, 7 ) / ( 4, 7 ) }
DATA {
(0,0): 0, -1, -2, -3, -4, -5, -6,
(1,0): 0, 0, 0, 0, 0, 0, 0,
(2,0): 0, 1, 2, 3, 4, 5, 6,
(3,0): 0, 2, 4, 6, 8, 10, 12
}
}
}
}

Contrast that with the following dataset, which has an Array datatype and is itself an array:

$ h5dump h5ex_t_array.h5
HDF5 "h5ex_t_array.h5" {
GROUP "/" {
DATASET "DS1" {
DATATYPE H5T_ARRAY { [3][5] H5T_STD_I64LE }
DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
DATA {
(0): [ 0, 0, 0, 0, 0,
0, -1, -2, -3, -4,
0, -2, -4, -6, -8 ],
(1): [ 0, 1, 2, 3, 4,
1, 1, 1, 1, 1,
2, 1, 0, -1, -2 ],
(2): [ 0, 2, 4, 6, 8,
2, 3, 4, 5, 6,
4, 4, 4, 4, 4 ],
(3): [ 0, 3, 6, 9, 12,
3, 5, 7, 9, 11,
6, 7, 8, 9, 10 ]
}
}
}
}

In this file, dataset /DS1 has a datatype of

H5T_ARRAY { [3][5] H5T_STD_I64LE }

and it also has a dataspace of

SIMPLE { ( 4 ) / ( 4 ) }

In other words, it is an array of four elements, in which each element is a 3 by 5 array of H5T_STD_I64LE.

This dataset is much more complex. Also note that subsetting cannot be done on Array datatypes.

See this section for more information on the Array datatype.
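
An Array datatype like the one above is created with H5Tarray_create2. Below is a brief sketch (error checking omitted) of how the datatype and dataspace of a dataset like /DS1 in h5ex_t_array.h5 could be defined:

#include "hdf5.h"

int
main(void)
{
    hsize_t adims[2] = {3, 5};   /* each element is a 3 x 5 array     */
    hsize_t dims[1]  = {4};      /* the dataset holds 4 such elements */

    hid_t file = H5Fcreate("h5ex_t_array.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    /* Array datatype: a 3 x 5 array of 64-bit little-endian integers */
    hid_t atype = H5Tarray_create2(H5T_STD_I64LE, 2, adims);

    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "DS1", atype, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dclose(dset);
    H5Sclose(space);
    H5Tclose(atype);
    H5Fclose(file);
    return 0;
}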

New References

References were reworked in HDF5 1.12.0. The new reference datatype is H5T_STD_REF, and the old reference datatypes are deprecated. See HDF5 References.

Object Reference

An Object Reference is a reference to an entire object (attribute, dataset, group, or named datatype). A dataset with an Object Reference datatype consists of one or more Object References. An Object Reference dataset can be used as an index to an HDF5 file.

The /DS1 dataset in the following file (h5ex_t_objref.h5) is an Object Reference dataset. It contains two references, one to group /G1 and the other to dataset /DS2:

$ h5dump h5ex_t_objref.h5
HDF5 "h5ex_t_objref.h5" {
GROUP "/" {
DATASET "DS1" {
DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
DATA {
GROUP "h5ex_t_objref.h5/G1"
DATASET "h5ex_t_objref.h5/DS2"
DATA {
}
}
}
DATASET "DS2" {
DATATYPE H5T_STD_I32LE
DATASPACE NULL
DATA {
}
}
GROUP "G1" {
}
}
}
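With the new reference API, object references like these are created with H5Rcreate_object and stored in a dataset of type H5T_STD_REF. The following is only a sketch of the relevant calls (it is not the actual example program, and error checking is omitted):

#include "hdf5.h"

int
main(void)
{
    H5R_ref_t wdata[2];          /* the two object references */
    hsize_t   dims[1] = {2};

    hid_t file = H5Fcreate("h5ex_t_objref.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

    /* Create the objects that will be referenced: group /G1 and dataset /DS2 */
    hid_t g1     = H5Gcreate2(file, "G1", H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    hid_t nullsp = H5Screate(H5S_NULL);
    hid_t ds2    = H5Dcreate2(file, "DS2", H5T_STD_I32LE, nullsp,
                              H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Create one reference to each object */
    H5Rcreate_object(file, "G1",  H5P_DEFAULT, &wdata[0]);
    H5Rcreate_object(file, "DS2", H5P_DEFAULT, &wdata[1]);

    /* Write the references to a dataset whose datatype is H5T_STD_REF */
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t ds1   = H5Dcreate2(file, "DS1", H5T_STD_REF, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Dwrite(ds1, H5T_STD_REF, H5S_ALL, H5S_ALL, H5P_DEFAULT, wdata);

    /* References must be released when no longer needed */
    H5Rdestroy(&wdata[0]);
    H5Rdestroy(&wdata[1]);

    H5Dclose(ds1); H5Dclose(ds2);
    H5Sclose(space); H5Sclose(nullsp);
    H5Gclose(g1);
    H5Fclose(file);
    return 0;
}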

Region Reference

A Region Reference is a reference to a selection within a dataset. A selection can be either individual elements or a hyperslab. In h5dump you will see the name of the dataset along with the elements or slab that is selected. A dataset with a Region Reference datatype consists of one or more Region References.

An example of a Region Reference dataset (h5ex_t_regref.h5) can be found on the Examples by API page, under Datatypes. If you examine this dataset with h5dump you will see that /DS1 is a Region Reference dataset as indicated by its datatype, highlighted in bold below:

$ h5dump h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
GROUP "/" {
DATASET "DS1" {
DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
DATA {
DATASET "h5ex_t_regref.h5/DS2"{
REGION_TYPE POINT (0,1), (2,11), (1,0), (2,4)
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
}
DATASET "h5ex_t_regref.h5/DS2" {
REGION_TYPE BLOCK (0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2),
(2,11)-(2,13)
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
}
}
}
DATASET "DS2" {
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
DATA {
(0,0): 84, 104, 101, 32, 113, 117, 105, 99, 107, 32, 98, 114, 111, 119,
(0,14): 110, 0,
(1,0): 102, 111, 120, 32, 106, 117, 109, 112, 115, 32, 111, 118, 101,
(1,13): 114, 32, 0,
(2,0): 116, 104, 101, 32, 53, 32, 108, 97, 122, 121, 32, 100, 111, 103,
(2,14): 115, 0
}
}
}
}

It contains two Region References:

  • A selection of four individual elements in dataset /DS2 : (0,1), (2,11), (1,0), (2,4) See the H5Sselect_elements API in the HDF5 User Guide for information on selecting individual elements.
  • A selection of these blocks in dataset /DS2 : (0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13) See the H5Sselect_hyperslab API in the HDF5 User Guide for how to do hyperslab selection.

If you look at the code that creates the dataset (h5ex_t_regref.c) you will see that the first reference is created with these calls:

status = H5Sselect_elements (space, H5S_SELECT_SET, 4, coords[0]);
status = H5Rcreate_region(file, DATASET2, space, H5P_DEFAULT, &wdata[0]);

where the buffer containing the coordinates to select is:

coords[4][2] = { {0, 1},
{2, 11},
{1, 0},
{2, 4} },

The second reference is created by calling,

status = H5Sselect_hyperslab (space, H5S_SELECT_SET, start, stride, count, block);
status = H5Rcreate_region(file, DATASET2, space, H5P_DEFAULT, &wdata[1]);

where start, stride, count, and block have these values:

start[2] = {0, 0},
stride[2] = {2, 11},
count[2] = {2, 2},
block[2] = {1, 3};

These start, stride, count, and block values will select the elements shown in bold in the dataset:

84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 0
102 111 120 32 106 117 109 112 115 32 111 118 101 114 32 0
116 104 101 32 53 32 108 97 122 121 32 100 111 103 115 0

If you use h5dump to select a subset of dataset /DS2 with these start, stride, count, and block values, you will see that the same elements are selected:

$ h5dump -d "/DS2" -s "0,0" -S "2,11" -c "2,2" -k "1,3" h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
DATASET "/DS2" {
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
SUBSET {
START ( 0, 0 );
STRIDE ( 2, 11 );
COUNT ( 2, 2 );
BLOCK ( 1, 3 );
DATA {
(0,0): 84, 104, 101, 114, 111, 119,
(2,0): 116, 104, 101, 100, 111, 103
}
}
}
}

Note that you must release the references created in the code with the H5Rdestroy API:

status = H5Rdestroy(&wdata[0]);
status = H5Rdestroy(&wdata[1]);

For more information on selections, see the tutorial topic on Reading From or Writing To a Subset of a Dataset. Also see the Dataset Subset tutorial topic on using h5dump to view a subset.

Deprecated Object Reference

An Object Reference is a reference to an entire object (dataset, group, or named datatype). A dataset with an Object Reference datatype consists of one or more Object References. An Object Reference dataset can be used as an index to an HDF5 file.

The /DS1 dataset in the following file (h5ex_t_objref.h5) is an Object Reference dataset. It contains two references, one to group /G1 and the other to dataset /DS2:

$ h5dump h5ex_t_objref.h5
HDF5 "h5ex_t_objref.h5" {
GROUP "/" {
DATASET "DS1" {
DATATYPE H5T_REFERENCE { H5T_STD_REF_OBJECT }
DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
DATA {
(0): GROUP 1400 /G1 , DATASET 800 /DS2
}
}
DATASET "DS2" {
DATATYPE H5T_STD_I32LE
DATASPACE NULL
DATA {
}
}
GROUP "G1" {
}
}
}

Deprecated Region Reference

A Region Reference is a reference to a selection within a dataset. A selection can be either individual elements or a hyperslab. In h5dump you will see the name of the dataset along with the elements or slab that is selected. A dataset with a Region Reference datatype consists of one or more Region References.

An example of a Region Reference dataset (h5ex_t_regref.h5) can be found on the Examples by API page, under Datatypes. If you examine this dataset with h5dump you will see that /DS1 is a Region Reference dataset as indicated by its datatype, highlighted in bold below:

$ h5dump h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
GROUP "/" {
DATASET "DS1" {
DATASPACE SIMPLE { ( 2 ) / ( 2 ) }
DATA {
DATASET /DS2 {(0,1), (2,11), (1,0), (2,4)},
DATASET /DS2 {(0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13)}
}
}
DATASET "DS2" {
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
DATA {
(0,0): 84, 104, 101, 32, 113, 117, 105, 99, 107, 32, 98, 114, 111, 119,
(0,14): 110, 0,
(1,0): 102, 111, 120, 32, 106, 117, 109, 112, 115, 32, 111, 118, 101,
(1,13): 114, 32, 0,
(2,0): 116, 104, 101, 32, 53, 32, 108, 97, 122, 121, 32, 100, 111, 103,
(2,14): 115, 0
}
}
}
}

It contains two Region References:

  • A selection of four individual elements in dataset /DS2 : (0,1), (2,11), (1,0), (2,4) See the H5Sselect_elements API in the HDF5 User Guide for information on selecting individual elements.
  • A selection of these blocks in dataset /DS2 : (0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13) See the H5Sselect_hyperslab API in the HDF5 User Guide for how to do hyperslab selection.

If you look at the code that creates the dataset (h5ex_t_regref.c) you will see that the first reference is created with these calls:

status = H5Sselect_elements (space, H5S_SELECT_SET, 4, coords[0]);
status = H5Rcreate (&wdata[0], file, DATASET2, H5R_DATASET_REGION, space);

where the buffer containing the coordinates to select is:

coords[4][2] = { {0, 1},
{2, 11},
{1, 0},
{2, 4} },

The second reference is created by calling,

status = H5Sselect_hyperslab (space, H5S_SELECT_SET, start, stride, count, block);
status = H5Rcreate (&wdata[1], file, DATASET2, H5R_DATASET_REGION, space);

where start, stride, count, and block have these values:

start[2] = {0, 0},
stride[2] = {2, 11},
count[2] = {2, 2},
block[2] = {1, 3};

These start, stride, count, and block values will select the elements shown in bold in the dataset:

84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 0
102 111 120 32 106 117 109 112 115 32 111 118 101 114 32 0
116 104 101 32 53 32 108 97 122 121 32 100 111 103 115 0

If you use h5dump to select a subset of dataset /DS2 with these start, stride, count, and block values, you will see that the same elements are selected:

$ h5dump -d "/DS2" -s "0,0" -S "2,11" -c "2,2" -k "1,3" h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
DATASET "/DS2" {
DATATYPE H5T_STD_I8LE
DATASPACE SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
SUBSET {
START ( 0, 0 );
STRIDE ( 2, 11 );
COUNT ( 2, 2 );
BLOCK ( 1, 3 );
DATA {
(0,0): 84, 104, 101, 114, 111, 119,
(2,0): 116, 104, 101, 100, 111, 103
}
}
}
}

For more information on selections, see the tutorial topic on Reading From or Writing To a Subset of a Dataset. Also see the Dataset Subset tutorial topic on using h5dump to view a subset.

String

There are two types of string data: fixed-length strings and variable-length strings.

Below is the h5dump output for two files that have the same strings written to them. In one file the strings have a fixed length, and in the other the strings vary in length.

Dataset of Fixed Length Strings

HDF5 "h5ex_t_string.h5" {
GROUP "/" {
DATASET "DS1" {
DATATYPE H5T_STRING {
STRSIZE 7;
STRPAD H5T_STR_SPACEPAD;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
DATA {
(0): "Parting", "is such", "sweet ", "sorrow."
}
}
}
}

Dataset of Variable Length Strings

HDF5 "h5ex_t_vlstring.h5" {
GROUP "/" {
DATASET "DS1" {
DATATYPE H5T_STRING {
STRSIZE H5T_VARIABLE;
STRPAD H5T_STR_NULLTERM;
CSET H5T_CSET_ASCII;
CTYPE H5T_C_S1;
}
DATASPACE SIMPLE { ( 4 ) / ( 4 ) }
DATA {
(0): "Parting", "is such", "sweet", "sorrow."
}
}
}
}
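The only real difference between the two datasets is how the string datatype is defined: the fixed-length version sets an explicit size, while the variable-length version uses H5T_VARIABLE. A brief sketch (error checking omitted) of defining the two datatypes in C:

#include "hdf5.h"

int
main(void)
{
    /* Fixed-length strings: every element is exactly 7 bytes */
    hid_t fixed_type = H5Tcopy(H5T_C_S1);
    H5Tset_size(fixed_type, 7);

    /* Variable-length strings: each element can have a different length */
    hid_t vl_type = H5Tcopy(H5T_C_S1);
    H5Tset_size(vl_type, H5T_VARIABLE);

    /* ... use either datatype with H5Dcreate2/H5Dwrite as usual ... */

    H5Tclose(fixed_type);
    H5Tclose(vl_type);
    return 0;
}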

You might wonder which to use. Some comments to consider are included below.

  • In general, a variable length string dataset is more complex than a fixed length string dataset. If you do not specifically need a variable length type, just use a fixed length string.
  • A variable length dataset consists of pointers to heaps in different locations in the file. For this reason, a variable length dataset cannot be compressed. (Basically, the pointers get compressed, not the actual data!) If compression is needed, do not use variable length types.
  • If you need to store an array of strings of differing lengths, you can either use fixed length strings along with compression, or use a variable length string.
