tilelang.jit.adapter.cython.adapter¶

Profiler and torch-conversion utilities for the Cython kernel adapter.

Attributes¶

Classes¶

CythonKernelAdapter

Adapter class that converts TVM/TIR functions into callable CUDA kernels through a Cython wrapper.

Functions¶

get_cython_compiler()

Return the path to the Cython compiler.

get_cache_dir()

Get the cache directory for the current Python version.

get_cached_lib(source_code)

Try to load a cached library; the library handle is None if no cached build exists.

Module Contents¶

tilelang.jit.adapter.cython.adapter.logger¶
tilelang.jit.adapter.cython.adapter.get_cython_compiler()¶

Return the path to the Cython compiler.

Returns:

out – The path to the Cython compiler, or None if none was found.

Return type:

Optional[str]
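A minimal usage sketch; the error message shown here is illustrative, not part of the API:

from tilelang.jit.adapter.cython.adapter import get_cython_compiler

# Locate the Cython compiler before attempting to build a wrapper.
cython = get_cython_compiler()
if cython is None:
    raise RuntimeError("Cython compiler not found; try `pip install cython`.")
print(f"Cython compiler: {cython}")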

tilelang.jit.adapter.cython.adapter.get_cache_dir()¶

Get the cache directory for the current Python version.

Return type:

pathlib.Path
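A small sketch showing how the cache location can be inspected:

from tilelang.jit.adapter.cython.adapter import get_cache_dir

cache_dir = get_cache_dir()   # a pathlib.Path keyed to the running Python version
print(cache_dir, cache_dir.exists())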

tilelang.jit.adapter.cython.adapter.get_cached_lib(source_code)¶

Try to load a cached library built from the given source code; the library handle is None if no cached build exists.

Parameters:

source_code (str)

Return type:

Tuple[Optional[ctypes.CDLL], pathlib.Path]
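A hedged sketch of a cache lookup; `wrapper_source` is a placeholder for real generated wrapper code:

from tilelang.jit.adapter.cython.adapter import get_cached_lib

wrapper_source = "..."   # placeholder for generated Cython wrapper source
lib, lib_path = get_cached_lib(wrapper_source)
if lib is None:
    print(f"No cached build yet; it would be written to {lib_path}")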

tilelang.jit.adapter.cython.adapter.current_dir¶
tilelang.jit.adapter.cython.adapter.cython_wrapper_path¶
tilelang.jit.adapter.cython.adapter.cython_wrapper_code¶
class tilelang.jit.adapter.cython.adapter.CythonKernelAdapter(params, result_idx, target, func_or_mod, host_mod=None, device_mod=None, kernel_global_source=None, verbose=False, pass_configs=None, compile_flags=None)¶

Bases: tilelang.jit.adapter.base.BaseKernelAdapter

Adapter class that converts TVM/TIR functions into callable CUDA kernels through a Cython wrapper.

This adapter handles:

1. Converting TIR functions to compiled CUDA libraries
2. Managing dynamic shapes in tensor operations
3. Wrapping C++ kernels for Python/PyTorch usage

Parameters:
  • params (List[tilelang.engine.param.KernelParam])

  • result_idx (List[int])

  • target (Union[str, tvm.target.Target])

  • func_or_mod (Union[tvm.tir.PrimFunc, tilelang.tvm.IRModule])

  • host_mod (Optional[tilelang.tvm.IRModule])

  • device_mod (Optional[tilelang.tvm.IRModule])

  • kernel_global_source (Optional[str])

  • verbose (bool)

  • pass_configs (Optional[Dict[str, Any]])

  • compile_flags (Optional[List[str]])
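The adapter is normally created by tilelang's JIT pipeline rather than instantiated by hand. The sketch below assumes `prim_func` is an already lowered tvm.tir.PrimFunc and `kernel_params` is the matching List[KernelParam]; both names are placeholders:

from tilelang.jit.adapter.cython.adapter import CythonKernelAdapter

adapter = CythonKernelAdapter(
    params=kernel_params,                 # assumed: List[KernelParam] describing the kernel signature
    result_idx=[len(kernel_params) - 1],  # treat the last argument as the output
    target="cuda",
    func_or_mod=prim_func,                # assumed: lowered tvm.tir.PrimFunc or IRModule
    verbose=False,
)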

target: str | tvm.target.Target = 'cuda'¶
ir_module: tilelang.tvm.IRModule | None = None¶
kernel_global_source: str | None = None¶
lib: ctypes.CDLL | None = None¶
wrapped_source: str | None = None¶
dynamic_symbolic_map: Dict[tvm.tir.Var, Tuple[int, int]] | None = None¶
ptr_map: Dict[int, str] | None = None¶
buffer_dtype_map: Dict[tvm.tir.Var, Tuple[int, torch.dtype]] | None = None¶
static_shape_map: Dict[tvm.tir.Var, Tuple[int, List[Tuple[int, int]]]] | None = None¶
static_strides_map: Dict[tvm.tir.Var, Tuple[int, List[Tuple[int, int]]]] | None = None¶
static_contiguous_list: List[tvm.tir.Var] | None = None¶
buffer_device_map: Dict[tvm.tir.Var, Tuple[int, torch.device]] | None = None¶
pass_configs: Dict[str, Any] | None = None¶
params¶
result_idx¶
verbose = False¶
wrapper¶
lib_generator¶
cython_wrapper¶
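After construction, the attributes above hold the compilation products. A short inspection sketch, assuming `adapter` is an instance from the previous example:

print(adapter.wrapped_source is not None)   # generated wrapper source, if compilation ran
print(adapter.dynamic_symbolic_map)         # symbolic shape vars mapped to buffer/dim indices, or None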
classmethod from_database(params, result_idx, target, func_or_mod, kernel_global_source, kernel_lib_path, verbose=False, pass_configs=None, compile_flags=None)¶
Parameters:
  • params (List[tvm.relax.TensorType])

  • result_idx (List[int])

  • target (str)

  • func_or_mod (Union[tvm.tir.PrimFunc, tilelang.tvm.IRModule])

  • kernel_global_source (str)

  • kernel_lib_path (str)

  • verbose (bool)

  • pass_configs (Optional[Dict[str, Any]])

  • compile_flags (Optional[List[str]])
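A hedged sketch of restoring an adapter from previously cached artifacts; `cached_params`, `cached_source`, and `cached_lib_path` are assumed to come from an earlier compilation stored in the kernel database:

adapter = CythonKernelAdapter.from_database(
    params=cached_params,               # assumed: parameter descriptions saved with the kernel
    result_idx=[1],
    target="cuda",
    func_or_mod=prim_func,              # assumed: the original PrimFunc or IRModule
    kernel_global_source=cached_source,
    kernel_lib_path=cached_lib_path,    # path to the already compiled shared library
)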

property prim_func: tvm.tir.PrimFunc¶

Returns the primary TIR function from the IR module.

Return type:

tvm.tir.PrimFunc

property srcpath¶

Returns the source path of the compiled library.

property libpath¶

Returns the path to the compiled library.

property lib_code¶

Returns the source code of the compiled library.

property is_dynamic¶

Indicates whether the kernel handles dynamic shapes.
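A quick look at these properties on a compiled adapter, assuming `adapter` from the earlier sketches:

print(adapter.srcpath)      # where the generated source was written
print(adapter.libpath)      # where the compiled shared library lives
print(adapter.is_dynamic)   # True when the kernel carries symbolic shapes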

get_kernel_source(kernel_only=False)¶

Returns the source code of the compiled kernel.

Parameters:

kernel_only (bool)
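Usage sketch, again assuming an existing `adapter` instance; the `kernel_only` flag presumably restricts the output to the device kernel source:

full_source = adapter.get_kernel_source()                   # full wrapped source
device_only = adapter.get_kernel_source(kernel_only=True)   # device kernel source only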