Accelerating ONNX Runtime with CUDA and TensorRT

Published: January 5, 2024

1. Version matching

The versions of ONNX Runtime, CUDA/cuDNN, and TensorRT must match one another, otherwise the program will fail at runtime (typically when the provider library is loaded or the session is created). Check the compatibility tables in the official ONNX Runtime documentation before installing.
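
As a quick sanity check, you can print the version of the ONNX Runtime library your program actually links against and compare it with the compatibility table. A minimal sketch using the public C API's OrtGetApiBase()->GetVersionString():

#include <iostream>
#include <onnxruntime_cxx_api.h>

int main() {
    // Version string of the ONNX Runtime library loaded at runtime, e.g. "1.16.3";
    // compare it against the CUDA/cuDNN/TensorRT versions required by that release.
    std::cout << "ONNX Runtime version: "
              << OrtGetApiBase()->GetVersionString() << std::endl;
    return 0;
}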

2. TensorRT acceleration in ONNX Runtime

Method 1: fill in the legacy OrtTensorRTProviderOptions struct and append it through the C++ session options:

OrtTensorRTProviderOptions trt_options{};
trt_options.trt_max_workspace_size = 2147483648;        // max TensorRT workspace: 2 GB
trt_options.trt_max_partition_iterations = 10;          // max graph-partitioning iterations
trt_options.trt_min_subgraph_size = 5;                  // minimum node count for a TensorRT subgraph
trt_options.trt_fp16_enable = 0;                        // FP16 precision disabled
trt_options.trt_int8_enable = 1;                        // INT8 precision enabled (requires a calibration table)
trt_options.trt_int8_use_native_calibration_table = 0;  // use the ORT-generated calibration table, not TensorRT's native one
trt_options.trt_engine_cache_enable = 1;                // cache built engines so later runs skip the rebuild
//trt_options.trt_engine_cache_path = "cache";          // directory for the engine cache
trt_options.trt_dump_subgraphs = 1;                     // dump the subgraphs assigned to TensorRT (for debugging)
sessionOptions.AppendExecutionProvider_TensorRT(trt_options);
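
Note that with trt_int8_enable = 1, TensorRT expects an INT8 calibration table to be available; if you just want a quick speedup without preparing calibration data, the common choice is trt_fp16_enable = 1 with trt_int8_enable = 0. With the engine cache enabled, the expensive engine build happens only on the first run; subsequent runs reload the cached engine from disk.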

Method 2: the legacy one-line C helpers (the second argument is the device id). Registration order matters here: append the TensorRT provider before the CUDA provider, so TensorRT gets first pick of the graph nodes and CUDA serves as the fallback:

OrtSessionOptionsAppendExecutionProvider_Tensorrt(sessionOptions, 0);  // TensorRT first (higher priority)
OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);      // CUDA as fallback for unsupported nodes
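
These one-line helpers are the older, deprecated C entry points; recent ONNX Runtime releases recommend the V2 options API shown in Method 3, which exposes the same TensorRT settings as string key/value pairs.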

Method 3: the V2 provider-options API, configured through parallel key/value string arrays:

const auto& api = Ort::GetApi();
OrtTensorRTProviderOptionsV2* tensorrt_options = nullptr;
Ort::ThrowOnError(api.CreateTensorRTProviderOptions(&tensorrt_options));
// RAII wrapper so the options object is released even if an exception is thrown.
std::unique_ptr<OrtTensorRTProviderOptionsV2, decltype(api.ReleaseTensorRTProviderOptions)> rel_trt_options(
    tensorrt_options, api.ReleaseTensorRTProviderOptions);
// Keys and values are matched by index; the last value is the engine-cache
// directory (here literally a directory named "trt_engine_cache_path").
std::vector<const char*> keys{"device_id", "trt_fp16_enable", "trt_int8_enable",
                              "trt_engine_cache_enable", "trt_engine_cache_path"};
std::vector<const char*> values{"0", "1", "0", "1", "trt_engine_cache_path"};
Ort::ThrowOnError(api.UpdateTensorRTProviderOptions(rel_trt_options.get(), keys.data(), values.data(), keys.size()));
Ort::ThrowOnError(api.SessionOptionsAppendExecutionProvider_TensorRT_V2(
    static_cast<OrtSessionOptions*>(sessionOptions), rel_trt_options.get()));
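
For context, here is a minimal end-to-end sketch of how the sessionOptions object used above is typically created and consumed; the model file name model.onnx is a placeholder, and any of the three methods can fill in the provider-registration step:

#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "trt_demo");
    Ort::SessionOptions sessionOptions;

    // ... register the TensorRT (and optionally CUDA) execution providers here,
    //     using any of the three methods shown above ...

    // Creating the session triggers graph partitioning and, on the first run,
    // the TensorRT engine build; with the engine cache enabled, later runs
    // reload the cached engine instead of rebuilding it.
    Ort::Session session(env, ORT_TSTR("model.onnx"), sessionOptions);
    return 0;
}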


Source: https://blog.csdn.net/zk_ken/article/details/135411620