Overview
To make it quick and easy for users to work with the models on the platform, ModelScope provides a fully featured Python library. It contains the implementations of ModelScope's official models, together with the data preprocessing, post-processing, and evaluation code needed to run inference, fine-tuning, and other tasks with those models, plus a simple, easy-to-use API and a rich set of usage examples. With the library, model inference, training, and evaluation take only a few lines of code, and users can also build on it for rapid secondary development to realize their own ideas.
The models currently provided by the library cover five major AI fields (computer vision, natural language processing, speech, multimodal, and science) and dozens of application tasks; see the task introduction documentation for the full list.
Deep Learning Frameworks
The ModelScope Library currently supports the PyTorch and TensorFlow deep learning frameworks, with more to be added over time. All current official models support inference through the ModelScope Library, and some also support training and evaluation with it; see the corresponding model card for complete usage information.
Model Inference Pipeline
Model Inference
In deep learning, inference is the model's prediction step. In ModelScope, inference is carried out by a pipeline, and a complete pipeline generally consists of three stages: data preprocessing, model forward inference, and data post-processing.
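Conceptually, a pipeline call is just these three stages chained together. The sketch below is purely illustrative and does not use the ModelScope API; the stage functions are placeholders for whatever a concrete pipeline supplies.

# Conceptual sketch only: a pipeline call is roughly
# postprocess(forward(preprocess(raw_input))).
def run_pipeline(raw_input, preprocess, forward, postprocess):
    model_inputs = preprocess(raw_input)    # e.g. tokenization or image decoding
    model_outputs = forward(model_inputs)   # the model's forward pass
    return postprocess(model_outputs)       # e.g. mapping logits to readable labels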
Introduction to Pipeline
The pipeline() method is one of the most basic user-facing methods in the ModelScope framework and supports fast inference with many models across multiple domains. With pipeline(), a single line of code is enough to run inference for a specific task.
Using Pipeline
This article briefly describes how to use the pipeline method to load a model and run inference. The pipeline method can pull a model from the model hub by task type or by model name and run inference with it (see the sketch after the list below). The following topics are covered:
Environment setup
Important parameters
Basic pipeline usage
Inference with a specified preprocessor and model
Pipeline examples for different task scenarios
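A minimal sketch of the two ways to create a pipeline, by task name only or pinned to a specific model on the hub (the model ID shown is the word-segmentation model used throughout this article):

from modelscope.pipelines import pipeline

# Create a pipeline from the task name only; the task's default model is used.
p_by_task = pipeline('word-segmentation')

# Create a pipeline for the same task, pinned to a specific model on the hub.
p_by_model = pipeline('word-segmentation', model='damo/nlp_structbert_word-segmentation_chinese-base')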
Basic Pipeline Usage
Chinese Word Segmentation
The pipeline function can be called with just a task name; it then loads the task's default model and creates the corresponding pipeline object.
Python code
from modelscope.pipelines import pipeline
word_segmentation = pipeline('word-segmentation')
input_str = '開源技術小棧作者是Tinywan,你知道不?'
print(word_segmentation(input_str))
PHP code
<?php
$operator = PyCore::import("operator");
$builtins = PyCore::import("builtins");
$pipeline = PyCore::import('modelscope.pipelines')->pipeline;
$word_segmentation = $pipeline("word-segmentation");
$input_str = "開源技術小棧作者是Tinywan,你知道不?";
PyCore::print($word_segmentation($input_str));
Online conversion tool: https://www.swoole.com/py2php/
Output
/usr/local/php-8.2.14/bin/php demo.php
2024-03-25 21:41:42,434 - modelscope - INFO - PyTorch version 2.2.1 Found.
2024-03-25 21:41:42,434 - modelscope - INFO - Loading ast index from /home/www/.cache/modelscope/ast_indexer
2024-03-25 21:41:42,577 - modelscope - INFO - Loading done! Current index file version is 1.13.0, with md5 f54e9d2dceb89a6c989540d66db83a65 and a total number of 972 components indexed
2024-03-25 21:41:44,661 - modelscope - WARNING - Model revision not specified, use revision: v1.0.3
2024-03-25 21:41:44,879 - modelscope - INFO - initiate model from /home/www/.cache/modelscope/hub/damo/nlp_structbert_word-segmentation_chinese-base
2024-03-25 21:41:44,879 - modelscope - INFO - initiate model from location /home/www/.cache/modelscope/hub/damo/nlp_structbert_word-segmentation_chinese-base.
2024-03-25 21:41:44,880 - modelscope - INFO - initialize model from /home/www/.cache/modelscope/hub/damo/nlp_structbert_word-segmentation_chinese-base
You are using a model of type bert to instantiate a model of type structbert. This is not supported for all configurations of models and can yield errors.
2024-03-25 21:41:48,633 - modelscope - WARNING - No preprocessor field found in cfg.
2024-03-25 21:41:48,633 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-03-25 21:41:48,633 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/home/www/.cache/modelscope/hub/damo/nlp_structbert_word-segmentation_chinese-base'}. trying to build by task and model information.
2024-03-25 21:41:48,639 - modelscope - INFO - cuda is not available, using cpu instead.
2024-03-25 21:41:48,640 - modelscope - WARNING - No preprocessor field found in cfg.
2024-03-25 21:41:48,640 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-03-25 21:41:48,640 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/home/www/.cache/modelscope/hub/damo/nlp_structbert_word-segmentation_chinese-base', 'sequence_length': 512}. trying to build by task and model information.
/home/www/anaconda3/envs/tinywan-modelscope/lib/python3.10/site-packages/transformers/modeling_utils.py:962: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
{'output': ['開源', '技術', '小', '棧', '作者', '是', 'Tinywan', ',', '你', '知道', '不', '?']}
Passing Multiple Samples
A pipeline object also accepts a list of samples and returns a list of outputs, where each element is the result for the corresponding input sample. For multiple texts, the input data is processed item by item by an iterator inside the pipeline, and each result is appended to a single returned list.
Python code
from modelscope.pipelines import pipeline
word_segmentation = pipeline('word-segmentation')
inputs = ['開源技術小棧作者是Tinywan,你知道不?','webman這個框架不錯,建議你看看']
print(word_segmentation(inputs))
PHP code
<?php
$operator = PyCore::import("operator");
$builtins = PyCore::import("builtins");
$pipeline = PyCore::import('modelscope.pipelines')->pipeline;
$word_segmentation = $pipeline("word-segmentation");
$inputs = new PyList(["開源技術小棧作者是Tinywan,你知道不?", "webman這個框架不錯,建議你看看"]);
PyCore::print($word_segmentation($inputs));
Output
[{'output': ['開源', '技術', '小', '棧', '作者', '是', 'Tinywan', ',', '你', '知道', '不', '?']},
{'output': ['webman', '這個', '框架', '不錯', ',', '建議', '你', '看看']}]
Batch Inference
Pipeline support for batch inference is similar to passing multiple samples above; the difference is that the forward pass is executed in batches of the user-specified batch_size inside the model's forward step.
# Continuing with the word_segmentation pipeline created above
inputs = ['今天天氣不錯,適合出去遊玩', '這本書很好,建議你看看']
# Pass the batch_size parameter to enable batched inference
print(word_segmentation(inputs, batch_size=2))
# Output
[{'output': ['今天', '天氣', '不錯', ',', '適合', '出去', '遊玩']}, {'output': ['這', '本', '書', '很', '好', ',', '建議', '你', '看看']}]
Passing a Dataset
from modelscope.msdatasets import MsDataset
from modelscope.pipelines import pipeline
inputs = ['今天天氣不錯,適合出去遊玩', '這本書很好,建議你看看']
dataset = MsDataset.load(inputs, target='sentence')
word_segmentation = pipeline('word-segmentation')
outputs = word_segmentation(dataset)
for o in outputs:
    print(o)
# Output
{'output': ['今天', '天氣', '不錯', ',', '適合', '出去', '遊玩']}
{'output': ['這', '本', '書', '很', '好', ',', '建議', '你', '看看']}
Inference with a Specified Preprocessor and Model
The pipeline function also accepts instantiated preprocessor and model objects, so users can customize preprocessing and the model used during inference.
Creating a model object for inference
Python code
from modelscope.models import Model
from modelscope.pipelines import pipeline
model = Model.from_pretrained('damo/nlp_structbert_word-segmentation_chinese-base')
word_segmentation = pipeline('word-segmentation', model=model)
inputs = ['開源技術小棧作者是Tinywan,你知道不?','webman這個框架不錯,建議你看看']
print(word_segmentation(inputs))
PHP code
<?php
$operator = PyCore::import("operator");
$builtins = PyCore::import("builtins");
$Model = PyCore::import('modelscope.models')->Model;
$pipeline = PyCore::import('modelscope.pipelines')->pipeline;
$model = $Model->from_pretrained("damo/nlp_structbert_word-segmentation_chinese-base");
$word_segmentation = $pipeline("word-segmentation", model: $model);
$inputs = new PyList(["開源技術小棧作者是Tinywan,你知道不?", "webman這個框架不錯,建議你看看"]);
PyCore::print($word_segmentation($inputs));
Output
[{'output': ['開源', '技術', '小', '棧', '作者', '是', 'Tinywan', ',', '你', '知道', '不', '?']},
{'output': ['webman', '這個', '框架', '不錯', ',', '建議', '你', '看看']}]
Creating a preprocessor and a model object for inference
from modelscope.models import Model
from modelscope.pipelines import pipeline
from modelscope.preprocessors import Preprocessor, TokenClassificationTransformersPreprocessor
model = Model.from_pretrained('damo/nlp_structbert_word-segmentation_chinese-base')
tokenizer = Preprocessor.from_pretrained(model.model_dir)
# Or call the constructor directly:
# tokenizer = TokenClassificationTransformersPreprocessor(model.model_dir)
word_segmentation = pipeline('word-segmentation', model=model, preprocessor=tokenizer)
inputs = ['開源技術小棧作者是Tinywan,你知道不?','webman這個框架不錯,建議你看看']
print(word_segmentation(inputs))
[{'output': ['開源', '技術', '小', '棧', '作者', '是', 'Tinywan', ',', '你', '知道', '不', '?']},
{'output': ['webman', '這個', '框架', '不錯', ',', '建議', '你', '看看']}]
Images
Note:
Make sure the OpenCV library is installed. If it is not, install it with pip:
pip install opencv-python
Without it you will get the following error:
PHP Fatal error: Uncaught PyError: No module named 'cv2' in /home/www/build/ai/demo3.php:4
Also make sure the TensorFlow deep learning framework is installed; otherwise you will see:
modelscope.pipelines.cv.image_matting_pipeline requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
This error means that the modelscope.pipelines.cv.image_matting_pipeline module you are trying to use depends on TensorFlow, but the required TensorFlow dependency is missing. Install the latest TensorFlow with:
pip install tensorflow
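As a quick sanity check before running the matting example, you can confirm that both optional dependencies import cleanly in the Python environment that PHP embeds (a minimal sketch; it assumes that environment is the same conda environment used in the logs below):

# Verify the optional dependencies for the image pipelines.
import cv2                  # provided by the opencv-python package
import tensorflow as tf

print('OpenCV version:', cv2.__version__)
print('TensorFlow version:', tf.__version__)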
Portrait Matting ('portrait-matting')
Input image
Python code
import cv2
from modelscope.pipelines import pipeline
portrait_matting = pipeline('portrait-matting')
result = portrait_matting('https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_matting.png')
cv2.imwrite('result.png', result['output_img'])
PHP code
tinywan-images.php
<?php
$operator = PyCore::import("operator");
$builtins = PyCore::import("builtins");
$cv2 = PyCore::import('cv2');
$pipeline = PyCore::import('modelscope.pipelines')->pipeline;
$portrait_matting = $pipeline("portrait-matting");
$result = $portrait_matting("https://modelscope.oss-cn-beijing.aliyuncs.com/test/images/image_matting.png");
$cv2->imwrite("tinywan_result.png", $result->__getitem__("output_img"));
Loading a local image file
$result = $portrait_matting("./tinywan.png");
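For reference, the Python counterpart is analogous: ModelScope pipelines generally accept a local file path in the same way as a URL. A minimal sketch (the file names are just examples):

# Python counterpart: run portrait matting on a local file instead of a URL.
import cv2
from modelscope.pipelines import pipeline

portrait_matting = pipeline('portrait-matting')
result = portrait_matting('./tinywan.png')               # local path in place of a URL
cv2.imwrite('tinywan_result.png', result['output_img'])  # save the matted image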
Execution output
/usr/local/php-8.2.14/bin/php tinywan-images.php
2024-03-25 22:17:25,630 - modelscope - INFO - PyTorch version 2.2.1 Found.
2024-03-25 22:17:25,631 - modelscope - INFO - TensorFlow version 2.16.1 Found.
2024-03-25 22:17:25,631 - modelscope - INFO - Loading ast index from /home/www/.cache/modelscope/ast_indexer
2024-03-25 22:17:25,668 - modelscope - INFO - Loading done! Current index file version is 1.13.0, with md5 f54e9d2dceb89a6c989540d66db83a65 and a total number of 972 components indexed
2024-03-25 22:17:26,990 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0
2024-03-25 22:17:27.623085: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-03-25 22:17:27.678592: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-25 22:17:28.551510: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-03-25 22:17:29,206 - modelscope - INFO - initiate model from /home/www/.cache/modelscope/hub/damo/cv_unet_image-matting
2024-03-25 22:17:29,206 - modelscope - INFO - initiate model from location /home/www/.cache/modelscope/hub/damo/cv_unet_image-matting.
2024-03-25 22:17:29,209 - modelscope - WARNING - No preprocessor field found in cfg.
2024-03-25 22:17:29,210 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.
2024-03-25 22:17:29,210 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/home/www/.cache/modelscope/hub/damo/cv_unet_image-matting'}. trying to build by task and model information.
2024-03-25 22:17:29,210 - modelscope - WARNING - Find task: portrait-matting, model type: None. Insufficient information to build preprocessor, skip building preprocessor
WARNING:tensorflow:From /home/www/anaconda3/envs/tinywan-modelscope/lib/python3.10/site-packages/modelscope/utils/device.py:60: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2024-03-25 22:17:29,213 - modelscope - INFO - loading model from /home/www/.cache/modelscope/hub/damo/cv_unet_image-matting/tf_graph.pb
WARNING:tensorflow:From /home/www/anaconda3/envs/tinywan-modelscope/lib/python3.10/site-packages/modelscope/pipelines/cv/image_matting_pipeline.py:45: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
2024-03-25 22:17:29,745 - modelscope - INFO - load model done
Output image