hipMM Python API#

2025-08-01

Applies to Linux

Module Contents#

class hipmm.DeviceBuffer#

Bases: object

capacity(self) size_t#
copy(self)#

Returns a copy of DeviceBuffer.

Returns#

A deep copy of the existing DeviceBuffer

Examples#

>>> import rmm
>>> db = rmm.DeviceBuffer.to_device(b"abc")
>>> db_copy = db.copy()
>>> db.copy_to_host()
array([97, 98, 99], dtype=uint8)
>>> db_copy.copy_to_host()
array([97, 98, 99], dtype=uint8)
>>> assert db is not db_copy
>>> assert db.ptr != db_copy.ptr
copy_from_device(self, cuda_ary, Stream stream=DEFAULT_STREAM)#

Copy from a buffer on device to self

Parameters#

cuda_ary : object to copy from that has __cuda_array_interface__

stream : CUDA stream to use for copying; defaults to the default stream

Examples#

>>> import rmm
>>> db = rmm.DeviceBuffer(size=5)
>>> db2 = rmm.DeviceBuffer.to_device(b"abc")
>>> db.copy_from_device(db2)
>>> hb = db.copy_to_host()
>>> print(hb)
array([97, 98, 99,  0,  0], dtype=uint8)
copy_from_host(self, ary, Stream stream=DEFAULT_STREAM)#

Copy from a buffer on host to self

Parameters#

ary : bytes-like buffer to copy from

stream : CUDA stream to use for copying; defaults to the default stream

Examples#

>>> import rmm
>>> db = rmm.DeviceBuffer(size=10)
>>> hb = b"abcdef"
>>> db.copy_from_host(hb)
>>> hb = db.copy_to_host()
>>> print(hb)
array([97, 98, 99,  0,  0,  0,  0,  0,  0,  0], dtype=uint8)
copy_to_host(self, ary=None, Stream stream=DEFAULT_STREAM)#

Copy from a DeviceBuffer to a buffer on host.

Parameters#

ary : bytes-like buffer to write into; if not provided, a new host buffer is allocated and returned

stream : CUDA stream to use for copying; defaults to the default stream

Examples#

>>> import rmm
>>> db = rmm.DeviceBuffer.to_device(b"abc")
>>> hb = bytearray(db.nbytes)
>>> db.copy_to_host(hb)
>>> print(hb)
bytearray(b'abc')
>>> hb = db.copy_to_host()
>>> print(hb)
bytearray(b'abc')
nbytes#

Gets the size of the buffer in bytes.

prefetch(self, device=None, stream=None)#

Prefetch buffer data to the specified device on the specified stream.

Assumes the storage for this DeviceBuffer is CUDA managed memory (unified memory). If it is not, this function is a no-op.

Parameters#

device: optional

The CUDA device to which to prefetch the memory for this buffer. Defaults to the current CUDA device. To prefetch to the CPU, pass cudaCpuDeviceId as the device.

stream: optional

CUDA stream to use for prefetching. Defaults to self.stream
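
Examples#

A minimal sketch (assumes the buffer was allocated from a managed-memory resource; otherwise prefetch is a no-op, per the note above):

>>> import rmm
>>> rmm.reinitialize(managed_memory=True)
>>> db = rmm.DeviceBuffer.to_device(b"abc")
>>> db.prefetch()  # prefetch to the current device on db's stream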

ptr#

Gets a pointer to the underlying data.

reserve(self, size_t new_capacity, Stream stream=DEFAULT_STREAM) void#
resize(self, size_t new_size, Stream stream=DEFAULT_STREAM) void#
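
Examples#

A minimal sketch of reserving and resizing (illustrative; a resource may round capacity up beyond the requested value):

>>> import rmm
>>> db = rmm.DeviceBuffer(size=5)
>>> db.reserve(10)  # grow capacity without changing size
>>> db.capacity()
10
>>> db.resize(3)  # shrink the logical size; capacity is retained
>>> db.size
3
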
size#

Gets the size of the buffer in bytes.

static to_device(const unsigned char[::1] b, Stream stream=DEFAULT_STREAM)#

Create a new DeviceBuffer by copying the contents of the host buffer b to the device.

tobytes(self, Stream stream=DEFAULT_STREAM) bytes#
exception hipmm.RMMError(errcode, msg)#

Bases: Exception

hipmm.disable_logging()#

Disable logging if it was enabled previously using rmm.initialize() or rmm.enable_logging().

hipmm.enable_logging(log_file_name=None)#

Enable logging of run-time events for all devices.

Parameters#

log_file_name: str, optional

Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A ValueError is thrown if neither is available. A separate log file is produced for each device, and the suffix “.dev{id}” is automatically added to the log file name.

Notes#

Note that if you use the environment variable CUDA_VISIBLE_DEVICES with logging enabled, the suffix may not be what you expect. For example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced will still have suffix 0. Similarly, if you set CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file with suffix 0 will correspond to the GPU with device ID 1. Use rmm.get_log_filenames() to get the log file names corresponding to each device.
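
Examples#

A minimal sketch (output paths are illustrative; per-device suffixes are appended automatically):

>>> import rmm
>>> rmm.enable_logging(log_file_name="rmm.log")  # writes e.g. rmm.log.dev0
>>> rmm.get_log_filenames()
{0: 'rmm.log.dev0'}
>>> rmm.disable_logging()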

hipmm.flush_logger()#

Flush the debug logger. This will cause any buffered log messages to be written to the log file.

Debug logging prints messages to a log file. See Debug Logging for more information.

See Also#

set_flush_level : Set the flush level for the debug logger.

get_flush_level : Get the current debug logging flush level.

Examples#

>>> import rmm
>>> rmm.flush_logger() # flush the logger
hipmm.get_flush_level()#

Get the current debug logging flush level for the RMM logger. Messages of this level or higher will automatically flush to the file.

Debug logging prints messages to a log file. See Debug Logging for more information.

Returns#

logging_level

The current flush level, an instance of the logging_level enum.

See Also#

set_flush_level : Set the flush level for the logger.

flush_logger : Flush the logger.

Examples#

>>> import rmm
>>> rmm.get_flush_level() # get current flush level
<logging_level.INFO: 2>
hipmm.get_log_filenames()#

Returns the log filename (or None if not writing logs) for each device in use.

Examples#

>>> import rmm
>>> rmm.reinitialize(devices=[0, 1], logging=True, log_file_name="rmm.log")
>>> rmm.get_log_filenames()
{0: '/home/user/workspace/rapids/rmm/python/rmm.dev0.log',
 1: '/home/user/workspace/rapids/rmm/python/rmm.dev1.log'}
hipmm.get_logging_level()#

Get the current debug logging level.

Debug logging prints messages to a log file. See Debug Logging for more information.

Returns#

level: logging_level

The current debug logging level, an instance of the logging_level enum.

See Also#

set_logging_level : Set the debug logging level.

Examples#

>>> import rmm
>>> rmm.get_logging_level() # get current logging level
<logging_level.INFO: 2>
hipmm.is_initialized()#

Returns True if RMM has been initialized, False otherwise.

class hipmm.level_enum(*values)#

Bases: IntEnum

critical = 5#
debug = 1#
error = 4#
info = 2#
n_levels = 7#
off = 6#
trace = 0#
warn = 3#
hipmm.register_reinitialize_hook(func, *args, **kwargs)#

Add a function to the list of functions (“hooks”) that will be called before reinitialize().

A user or library may register hooks to perform any necessary cleanup before RMM is reinitialized. For example, a library with an internal cache of objects that use device memory allocated by RMM can register a hook to release those references before RMM is reinitialized, thus ensuring that the relevant device memory resource can be deallocated.

Hooks are called in the reverse order they are registered. This is useful, for example, when a library registers multiple hooks and needs them to run in a specific order for cleanup to be safe. Hooks cannot rely on being registered in a particular order relative to hooks registered by other packages, since that is determined by package import ordering.

Parameters#

func: callable

Function to be called before reinitialize()

args, kwargs

Positional and keyword arguments to be passed to func
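
Examples#

A minimal sketch in which a hypothetical cache of device buffers is released before reinitialization:

>>> import rmm
>>> cache = []  # stands in for a library's cache of DeviceBuffers
>>> def clear_cache():
...     cache.clear()
...
>>> rmm.register_reinitialize_hook(clear_cache)
>>> rmm.reinitialize()  # clear_cache() runs first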

hipmm.reinitialize(pool_allocator=False, managed_memory=False, initial_pool_size=None, maximum_pool_size=None, devices=0, logging=False, log_file_name=None)#

Finalizes and then initializes RMM using the options passed. Using memory from a previous initialization of RMM is undefined behavior and should be avoided.

Parameters#

pool_allocator: bool, default False

If True, use a pool allocation strategy which can greatly improve performance.

managed_memory: bool, default False

If True, use managed memory for device memory allocation

initial_pool_size: int | str, default None

When pool_allocator is True, this indicates the initial pool size in bytes. By default, 1/2 of the total GPU memory is used. When pool_allocator is False, this argument is ignored if provided. A string argument is parsed using parse_bytes.

maximum_pool_size: int | str, default None

When pool_allocator is True, this indicates the maximum pool size in bytes. By default, the total available memory on the GPU is used. When pool_allocator is False, this argument is ignored if provided. A string argument is parsed using parse_bytes.

devices: int or List[int], default 0

GPU device IDs to register. By default registers only GPU 0.

logging: bool, default False

If True, enable run-time logging of all memory events (alloc, free, realloc). This has a significant performance impact.

log_file_name: str

Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A ValueError is thrown if neither is available. A separate log file is produced for each device, and the suffix “.dev{id}” is automatically added to the log file name.

Notes#

Note that if you use the environment variable CUDA_VISIBLE_DEVICES with logging enabled, the suffix may not be what you expect. For example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced will still have suffix 0. Similarly, if you set CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file with suffix 0 will correspond to the GPU with device ID 1. Use rmm.get_log_filenames() to get the log file names corresponding to each device.
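
Examples#

A minimal sketch (the pool size shown is illustrative):

>>> import rmm
>>> rmm.reinitialize(pool_allocator=True, initial_pool_size=2**30)  # 1 GiB pool
>>> rmm.is_initialized()
True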

hipmm.set_flush_level(level)#

Set the flush level for the debug logger. Messages of this level or higher will automatically flush to the file.

Debug logging prints messages to a log file. See Debug Logging for more information.

Parameters#

level: logging_level

The debug logging level. Valid values are instances of the logging_level enum.

Raises#

TypeError

If the logging level is not an instance of the logging_level enum.

See Also#

get_flush_level : Get the current debug logging flush level.

flush_logger : Flush the logger.

Examples#

>>> import rmm
>>> rmm.set_flush_level(rmm.logging_level.WARN) # set flush level to warn
hipmm.set_logging_level(level)#

Set the debug logging level.

Debug logging prints messages to a log file. See Debug Logging for more information.

Parameters#

level: logging_level

The debug logging level. Valid values are instances of the logging_level enum.

Raises#

TypeError

If the logging level is not an instance of the logging_level enum.

See Also#

get_logging_level : Get the current debug logging level.

Examples#

>>> import rmm
>>> rmm.set_logging_level(rmm.logging_level.WARN) # set logging level to warn
hipmm.should_log(level)#

Check if a message at the given level would be logged.

A message at the given level would be logged if the current debug logging level is set to a level that is at least as verbose as the given level, and the RMM module is compiled for a logging level at least as verbose. If these conditions are not both met, this function will return False.

Debug logging prints messages to a log file. See Debug Logging for more information.

Parameters#

level: logging_level

The debug logging level. Valid values are instances of the logging_level enum.

Returns#

should_log: bool

True if a message at the given level would be logged, False otherwise.

Raises#

TypeError

If the logging level is not an instance of the logging_level enum.
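
Examples#

A minimal sketch (results also depend on the compile-time logging level, as described above):

>>> import rmm
>>> rmm.set_logging_level(rmm.logging_level.WARN)
>>> rmm.should_log(rmm.logging_level.ERROR)  # less verbose than WARN
True
>>> rmm.should_log(rmm.logging_level.INFO)  # more verbose than WARN
False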

hipmm.unregister_reinitialize_hook(func)#

Remove func from list of hooks that will be called before reinitialize().

If func was registered more than once, every instance of it will be removed from the list of hooks.

Memory Resources#

class hipmm.mr.ArenaMemoryResource(DeviceMemoryResource upstream_mr, arena_size=None, bool dump_log_on_failure=False)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.BinningMemoryResource(DeviceMemoryResource upstream_mr, int8_t min_size_exponent=-1, int8_t max_size_exponent=-1)#

Bases: UpstreamResourceAdaptor

add_bin(self, size_t allocation_size, DeviceMemoryResource bin_resource=None)#

Adds a bin of the specified maximum allocation size to this memory resource. If specified, uses bin_resource for allocation for this bin. If not specified, creates and uses a FixedSizeMemoryResource for allocation for this bin.

Allocations smaller than allocation_size and larger than the next smaller bin size will use this fixed-size memory resource.

Parameters#

allocation_size: size_t

The maximum allocation size in bytes for the created bin

bin_resource: DeviceMemoryResource

The resource to use for this bin (optional)
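
Examples#

A minimal sketch of a binning resource with fixed-size bins for allocations of 2**18 to 2**22 bytes, plus one explicitly added bin:

>>> import rmm
>>> upstream = rmm.mr.CudaMemoryResource()
>>> mr = rmm.mr.BinningMemoryResource(upstream, 18, 22)
>>> mr.add_bin(2**23)  # adds a FixedSizeMemoryResource bin for allocations up to 8 MiB
>>> rmm.mr.set_current_device_resource(mr)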

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

bin_mrs#

BinningMemoryResource.bin_mrs: list

Get the list of binned memory resources.

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.CallbackMemoryResource(allocate_func, deallocate_func)#

Bases: DeviceMemoryResource

A memory resource that uses the user-provided callables to do memory allocation and deallocation.

CallbackMemoryResource should really only be used for debugging memory issues, as there is a significant performance penalty associated with using a Python function for each memory allocation and deallocation.

Parameters#

allocate_func: callable

The allocation function must accept two arguments: an integer representing the number of bytes to allocate, and a Stream on which to perform the allocation. It must return an integer representing the pointer to the allocated memory.

deallocate_func: callable

The deallocation function must accept three arguments: an integer representing the pointer to the memory to free, a second integer representing the number of bytes to free, and a Stream on which to perform the deallocation.

Examples#

>>> import rmm
>>> base_mr = rmm.mr.CudaMemoryResource()
>>> def allocate_func(size, stream):
...     print(f"Allocating {size} bytes")
...     return base_mr.allocate(size, stream)
...
>>> def deallocate_func(ptr, size, stream):
...     print(f"Deallocating {size} bytes")
...     return base_mr.deallocate(ptr, size, stream)
...
>>> rmm.mr.set_current_device_resource(
...     rmm.mr.CallbackMemoryResource(allocate_func, deallocate_func)
... )
>>> dbuf = rmm.DeviceBuffer(size=256)
Allocating 256 bytes
>>> del dbuf
Deallocating 256 bytes
allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.CudaAsyncMemoryResource#

Bases: DeviceMemoryResource

Memory resource that uses cudaMallocAsync/cudaFreeAsync for allocation/deallocation.

Parameters#

initial_pool_size: int | str, optional

Initial pool size in bytes. By default, half the available memory on the device is used. A string argument is parsed using parse_bytes.

release_threshold: int, optional

Release threshold in bytes. If the pool size grows beyond this value, unused memory held by the pool will be released at the next synchronization point.

enable_ipc: bool, optional

If True, enables export of POSIX file descriptor handles for the memory allocated by this resource so that it can be used with CUDA IPC.
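
Examples#

A minimal sketch (sizes shown are illustrative):

>>> import rmm
>>> mr = rmm.mr.CudaAsyncMemoryResource(
...     initial_pool_size=2**30,
...     release_threshold=2**31,
... )
>>> rmm.mr.set_current_device_resource(mr)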

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.CudaMemoryResource#

Bases: DeviceMemoryResource

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.DeviceMemoryResource#

Bases: object

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.FailureCallbackResourceAdaptor(DeviceMemoryResource upstream_mr, callback)#

Bases: UpstreamResourceAdaptor
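
Examples#

A minimal sketch (assumes the callback receives the size in bytes of the failed allocation and returns True to retry it, False to propagate the failure):

>>> import rmm
>>> def oom_callback(nbytes):
...     print(f"Failed to allocate {nbytes} bytes")
...     return False  # do not retry; let the error propagate
...
>>> mr = rmm.mr.FailureCallbackResourceAdaptor(
...     rmm.mr.CudaMemoryResource(), oom_callback
... )
>>> rmm.mr.set_current_device_resource(mr)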

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.FixedSizeMemoryResource(DeviceMemoryResource upstream_mr, size_t block_size=0x100000, size_t blocks_to_preallocate=128)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.LimitingResourceAdaptor(DeviceMemoryResource upstream_mr, size_t allocation_limit)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_allocated_bytes(self) size_t#

Query the number of bytes that have been allocated. Note that this cannot be used to determine how large an allocation is possible, because of fragmentation as well as internal page sizes and alignment that are not tracked by this allocator.

get_allocation_limit(self) size_t#

Query the maximum number of bytes that this allocator is allowed to allocate. This is the limit on the allocator and not a representation of the underlying device. The device may not be able to support this limit.
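
Examples#

A minimal sketch capping RMM allocations at 1 GiB:

>>> import rmm
>>> mr = rmm.mr.LimitingResourceAdaptor(rmm.mr.CudaMemoryResource(), 2**30)
>>> rmm.mr.set_current_device_resource(mr)
>>> db = rmm.DeviceBuffer(size=256)  # counted against the limit
>>> mr.get_allocation_limit()
1073741824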

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.LoggingResourceAdaptor(DeviceMemoryResource upstream_mr, log_file_name=None)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

flush(self)#
get_file_name(self)#
get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.ManagedMemoryResource#

Bases: DeviceMemoryResource

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.PoolMemoryResource(DeviceMemoryResource upstream_mr, initial_pool_size=None, maximum_pool_size=None)#

Bases: UpstreamResourceAdaptor
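
Examples#

A minimal sketch of a pool suballocator on top of a CudaMemoryResource (sizes shown are illustrative):

>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
...     rmm.mr.CudaMemoryResource(),
...     initial_pool_size=2**30,
...     maximum_pool_size=2**32,
... )
>>> rmm.mr.set_current_device_resource(pool)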

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
pool_size(self)#
upstream_mr#
class hipmm.mr.PrefetchResourceAdaptor(DeviceMemoryResource upstream_mr)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
class hipmm.mr.SamHeadroomMemoryResource(size_t headroom)#

Bases: DeviceMemoryResource

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.StatisticsResourceAdaptor(DeviceMemoryResource upstream_mr)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

allocation_counts#

StatisticsResourceAdaptor.allocation_counts: Statistics

Gets the current, peak, and total allocated bytes and number of allocations.

The dictionary keys are current_bytes, current_count, peak_bytes, peak_count, total_bytes, and total_count.

Returns#

dict

Dictionary containing allocation counts and bytes.

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
pop_counters(self) Statistics#

Pop a counter pair (bytes and allocations) from the stack

Returns#

The popped statistics

push_counters(self) Statistics#

Push a new counter pair (bytes and allocations) on the stack

Returns#

The statistics before the push
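
Examples#

A minimal sketch (illustrative output; assumes the returned statistics support the dictionary keys listed under allocation_counts):

>>> import rmm
>>> mr = rmm.mr.StatisticsResourceAdaptor(rmm.mr.CudaMemoryResource())
>>> rmm.mr.set_current_device_resource(mr)
>>> db = rmm.DeviceBuffer(size=256)
>>> mr.allocation_counts["current_bytes"]
256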

upstream_mr#
class hipmm.mr.SystemMemoryResource#

Bases: DeviceMemoryResource

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

class hipmm.mr.TrackingResourceAdaptor(DeviceMemoryResource upstream_mr, bool capture_stacks=False)#

Bases: UpstreamResourceAdaptor

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_allocated_bytes(self) size_t#

Query the number of bytes that have been allocated. Note that this cannot be used to determine how large an allocation is possible, because of fragmentation as well as internal page sizes and alignment that are not tracked by this allocator.

get_outstanding_allocations_str(self) str#

Returns a string containing information about the current outstanding allocations. For each allocation, the address, size and optional stack trace are shown.
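
Examples#

A minimal sketch that captures call stacks for each allocation and reports outstanding ones:

>>> import rmm
>>> mr = rmm.mr.TrackingResourceAdaptor(
...     rmm.mr.CudaMemoryResource(), capture_stacks=True
... )
>>> rmm.mr.set_current_device_resource(mr)
>>> db = rmm.DeviceBuffer(size=256)
>>> print(mr.get_outstanding_allocations_str())  # address, size, stack trace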

get_upstream(self) DeviceMemoryResource#
log_outstanding_allocations(self)#

Logs the output of get_outstanding_allocations_str to the current RMM log file if enabled.

upstream_mr#
class hipmm.mr.UpstreamResourceAdaptor#

Bases: DeviceMemoryResource

Parent class for all memory resources that track an upstream.

Upstream resource tracking requires maintaining a reference to the upstream mr so that it is kept alive and may be accessed by any downstream resource adaptors.

allocate(self, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Allocate nbytes bytes of memory.

Parameters#

nbytes: size_t

The size of the allocation in bytes

stream: Stream

Optional stream for the allocation

deallocate(self, uintptr_t ptr, size_t nbytes, Stream stream=DEFAULT_STREAM)#

Deallocate memory pointed to by ptr of size nbytes.

Parameters#

ptr: uintptr_t

Pointer to be deallocated

nbytes: size_t

Size of the allocation in bytes

stream: Stream

Optional stream for the deallocation

get_upstream(self) DeviceMemoryResource#
upstream_mr#
hipmm.mr.available_device_memory()#

Returns a tuple of free and total device memory.
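
Examples#

A minimal sketch:

>>> import rmm
>>> free, total = rmm.mr.available_device_memory()
>>> free <= total
True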

hipmm.mr.disable_logging()#

Disable logging if it was enabled previously using rmm.initialize() or rmm.enable_logging().

hipmm.mr.enable_logging(log_file_name=None)#

Enable logging of run-time events for all devices.

Parameters#

log_file_name: str, optional

Name of the log file. If not specified, the environment variable RMM_LOG_FILE is used. A ValueError is thrown if neither is available. A separate log file is produced for each device, and the suffix “.dev{id}” is automatically added to the log file name.

Notes#

Note that if you use the environment variable CUDA_VISIBLE_DEVICES with logging enabled, the suffix may not be what you expect. For example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced will still have suffix 0. Similarly, if you set CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file with suffix 0 will correspond to the GPU with device ID 1. Use rmm.get_log_filenames() to get the log file names corresponding to each device.

hipmm.mr.get_current_device_resource() DeviceMemoryResource#

Get the memory resource used for RMM device allocations on the current device.

If the returned memory resource is used when a different device is the active CUDA device, behavior is undefined.

hipmm.mr.get_current_device_resource_type()#

Get the memory resource type used for RMM device allocations on the current device.

hipmm.mr.get_log_filenames()#

Returns the log filename (or None if not writing logs) for each device in use.

Examples#

>>> import rmm
>>> rmm.reinitialize(devices=[0, 1], logging=True, log_file_name="rmm.log")
>>> rmm.get_log_filenames()
{0: '/home/user/workspace/rapids/rmm/python/rmm.dev0.log',
 1: '/home/user/workspace/rapids/rmm/python/rmm.dev1.log'}
hipmm.mr.get_per_device_resource(int device)#

Get the default memory resource for the specified device.

If the returned memory resource is used when a different device is the active CUDA device, behavior is undefined.

Parameters#

device: int

The ID of the device for which to get the memory resource.

hipmm.mr.get_per_device_resource_type(int device)#

Get the memory resource type used for RMM device allocations on the specified device.

Parameters#

device: int

The device ID

hipmm.mr.is_initialized()#

Check whether RMM is initialized

hipmm.mr.set_current_device_resource(DeviceMemoryResource mr)#

Set the default memory resource for the current device.

Parameters#

mr: DeviceMemoryResource

The memory resource to set. Must have been created while the current device is the active CUDA device.

hipmm.mr.set_per_device_resource(int device, DeviceMemoryResource mr)#

Set the default memory resource for the specified device.

Parameters#

device: int

The ID of the device for which to set the memory resource.

mr: DeviceMemoryResource

The memory resource to set. Must have been created while device was the active CUDA device.

Memory Allocators#

hipmm.allocators.cupy.rmm_cupy_allocator(nbytes)#

A CuPy allocator that makes use of RMM.

Examples#

>>> from rmm.allocators.cupy import rmm_cupy_allocator
>>> import cupy
>>> cupy.cuda.set_allocator(rmm_cupy_allocator)
class hipmm.allocators.numba.RMMNumbaManager(*args, **kwargs)#

Bases: HostOnlyHIPMemoryManager

External Memory Management Plugin implementation for Numba. Provides on-device allocation only.

See https://numba.readthedocs.io/en/stable/cuda/external-memory.html for details of the interface being implemented here.
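
Examples#

A minimal sketch of installing this plugin as Numba's memory manager (uses Numba's standard set_memory_manager entry point):

>>> from rmm.allocators.numba import RMMNumbaManager
>>> from numba import cuda
>>> cuda.set_memory_manager(RMMNumbaManager)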

defer_cleanup()#

Returns a context manager that disables cleanup of mapped or pinned host memory in the current context whilst it is active.

EMM Plugins that override this method must obtain the context manager from this method before yielding to ensure that cleanup of host allocations is also deferred.

get_ipc_handle(memory)#

Get an IPC handle for the MemoryPointer memory with offset modified by the RMM memory pool.

get_memory_info()#

Returns (free, total) memory in bytes in the context.

This implementation raises NotImplementedError because the allocation will be performed using rmm’s currently set default mr, which may be a pool allocator.

initialize()#

Perform any initialization required for the EMM plugin instance to be ready to use.

Returns#

None

property interface_version#

Returns an integer specifying the version of the EMM Plugin interface supported by the plugin implementation. Should always return 1 for implementations of this version of the specification.

memalloc(size)#

Allocate an on-device array from the RMM pool.

memallocmanaged(size, attach_global)#
memhostalloc(size, mapped=False, portable=False, wc=False)#

Implements the allocation of pinned host memory.

It is recommended that this method is not overridden by EMM Plugin implementations - instead, use the numba.cuda.BaseCUDAMemoryManager.

mempin(owner, pointer, size, mapped=False)#

Implements the pinning of host memory.

It is recommended that this method is not overridden by EMM Plugin implementations - instead, use the numba.cuda.BaseCUDAMemoryManager.

reset()#

Clears up all host memory (mapped and/or pinned) in the current context.

EMM Plugins that override this method must call super().reset() to ensure that host allocations are also cleaned up.

Memory Statistics#