Protect the Models

Table of contents

  1. Class SkProtectorPyTorch
    1. Public constructor __init__
    2. Public methods protect_new_model and protect_updated_model
    3. Public method migrate_configuration
    4. Public method update_customer
  2. Class Customer
    1. Public constructor __init__

Migration from former versions

Migrations from versions below v0.1.0

The version v0.1.0 introduced changes to the protection API. Here are the steps to migrate to the v0.1.X versions:

  1. Replace the debug_logs argument of the protect call with save_logs in the SkProtector constructor arguments.
  2. Remove the output_path argument from the SkProtector constructor arguments.
  3. Replace the protect call with either protect_new_model or protect_updated_model.
  4. Rename the parameter list_input_tensor to input_tensors.
  5. Rename the parameter protected_model_format to model_export_formats.
  6. Replace layer_types_to_protect and filtered_layers usage with the new layer_filter parameter.
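As a sketch, the renamed calls might look as follows (the model name and surrounding code are illustrative, and the old-style call is shown only in comments):

```python
# Sketch of the migration (steps 1-6 above). The function is only defined
# here, never executed, and the argument values are illustrative.

# Old (pre-v0.1.0), shown for contrast:
#   protector = SkProtector(output_path="out")                  # step 2: output_path removed
#   protector.protect(model, "my_model", debug_logs=True, ...)  # step 1: debug_logs removed

def migrated_protect(protector, model, sample_input):
    # New (v0.1.x): save_logs now lives in the SkProtector constructor
    # (step 1), and protect() is split in two (step 3).
    return protector.protect_new_model(
        model=model,
        model_name="my_model",
        input_tensors=[sample_input],            # step 4: was list_input_tensor
        # step 5: protected_model_format is now model_export_formats
        layer_filter=lambda name, module: True,  # step 6: replaces layer_types_to_protect / filtered_layers
    )
```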

Migrations from versions below v0.2.0

The version v0.2.0 introduced support for torch.export AOTInductor and optimizations for TorchScript export. Here are the steps to migrate to the v0.2.X versions:

  1. The parameter model_export_formats takes new values: SkProtectorPyTorch.ExportFormat.ONNX, SkProtectorPyTorch.ExportFormat.TORCHSCRIPT_CPU, SkProtectorPyTorch.ExportFormat.TORCHSCRIPT_GPU, SkProtectorPyTorch.ExportFormat.INDUCTOR_CPU, SkProtectorPyTorch.ExportFormat.INDUCTOR_GPU.
  2. When protecting a model with several model_export_formats, the model is protected at most three times: once for the CPU versions (TorchScript + AOTInductor), once for the GPU versions (TorchScript + AOTInductor), and once for the ONNX version.

Class SkProtectorPyTorch

In a Python script, import the SkProtectorPyTorch class:

from skprotect_pytorch.protect import SkProtectorPyTorch

Public constructor __init__

Before applying the protection, an SkProtectorPyTorch object should be created.

protector = SkProtectorPyTorch(
    save_logs=False,
    application_name="default_app",
)

The constructor takes the parameters:

  • (Optional) save_logs: A boolean with a default value of False, which should be set to True if errors occur during protection, allowing logs to be saved in a folder named ./log that can later be transmitted to Skyld.
  • (Optional) application_name: The name of the application. To protect models with a different activation key per deployment, and therefore different protected parameter values (weights and biases), use a new unique application name for each deployment.
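As a sketch, a constructor call that enables log saving and derives a unique application name per deployment could look like this (the naming scheme and helper function are hypothetical, and the function is only defined here, not executed):

```python
# SkProtectorPyTorch comes from skprotect_pytorch.protect as shown above.
def make_protector(deployment_id: str):
    return SkProtectorPyTorch(
        save_logs=True,                                # keep ./log output in case of errors
        application_name=f"shop_app_{deployment_id}",  # hypothetical unique name per deployment
    )
```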

Public methods protect_new_model and protect_updated_model

  • protect_new_model: Should be used when protecting a model for the first time. This will add the model properties to the configuration file. The configuration file will be created if it does not exist.
  • protect_updated_model: Should be used when replacing a model that is already deployed. The replacement can be an updated model or a new model, but its model name must match the name of the model being replaced. This will replace the model properties in the existing configuration file.
  • All model protection scripts should be launched from the same directory to avoid duplicating the configuration file.
  • All files are saved in outputs/default_app (the full path is shown in the console when you run protect_*).

To apply the protection on a given model, call the function protect_new_model or protect_updated_model of the created SkProtectorPyTorch object with the necessary arguments to adjust protection to your needs. The function signature indicating types and default arguments is as follows:

def protect_<new|updated>_model(
        self,
        model: torch.nn.Module,
        model_name: str,
        input_tensors: List[Tensor],
        layer_filter: Optional[Callable[[str, torch.nn.Module],
                               bool]] = lambda _name, _module: True,
        model_export_formats: ExportFormat = ExportFormat.ONNX
        | ExportFormat.INDUCTOR_CPU
        | ExportFormat.INDUCTOR_GPU,
        deployment_platform: ExportPlatform = ExportPlatform.TXT,
        dynamic_shapes: Tuple[Dict[int, torch.export.Dim | None]] | List[Dict[int, torch.export.Dim | None]] | None = None,
        onnx_params: Optional[dict] = default_onnx_params,
        inductor_configs: Optional[dict] = inductor_configs,
        save_original: Optional[bool] = False,
        quantize_compatibility: Optional[bool] = False,
        test_key: Optional[bool] = False,
        platform_version: Optional[int] = 2,
        customer: Optional[Customer] = Customer(),
    )
  • model: The model to be protected (a torch.nn.Module object). The model must be loaded beforehand and already trained.
  • model_name: The name of the model without file extension. This name will be used to save the protected model.
  • input_tensors: A list of torch tensors to be passed to the model; each tensor in the list is one input of the model. These tensors are used for model export and to check graph integrity; their values do not affect the protection. For example, if two input tensors of shapes (1,3,32,32) and (1,10) are needed, the list can be constructed as follows: input_tensors=[torch.randn(1,3,32,32), torch.randn(1,10)]
  • (Optional) layer_filter: A function that selects the layers to protect based on the layer name and its torch.nn.Module instance. This function is called for each supported layer, and the layer is protected depending on the returned boolean value. By default, all supported layers are protected. Supported layer types are:
    • torch.nn.Linear
    • torch.nn.Conv1d
    • torch.nn.Conv2d
    • torch.nn.ConvTranspose1d
    • torch.nn.ConvTranspose2d
    • torch.nn.MultiheadAttention
    • torch.nn.LSTM
    • torch.nn.GRU
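For example, a layer_filter that protects only the layers under a hypothetical "classifier" submodule can be written as a plain function of the layer name:

```python
# Illustrative layer_filter: protect only layers whose dotted name lies
# under the (hypothetical) "classifier" submodule of the model.
def layer_filter(name: str, module) -> bool:
    # `name` is the dotted module path (e.g. "classifier.0"),
    # `module` is the torch.nn.Module instance (unused in this example).
    return name.startswith("classifier")
```

Pass it as layer_filter=layer_filter when calling protect_new_model or protect_updated_model; returning True protects the layer.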

To get the names of the layers in your network, you can use the following script or a model visualizer like netron.app if the structure is complex:

for name, module in model.named_modules():  # explore layers
    print(name)          # name of the layer
    print(type(module))  # type of the layer
  • (Optional) model_export_formats: An SkProtectorPyTorch.ExportFormat value indicating the format to use when saving the protected model. By default, the ONNX export is used. Multiple export formats can be combined with the | operator. The values of SkProtectorPyTorch.ExportFormat are described in the next table:
Value            Usage
TORCHSCRIPT_CPU  Save the protected model as a TorchScript file for CPU deployment.
TORCHSCRIPT_GPU  Save the protected model as a TorchScript file for CUDA deployment.
INDUCTOR_CPU     Save the protected model as an AOTInductor file using torch.export AOTInductor for CPU deployment.
INDUCTOR_GPU     Save the protected model as an AOTInductor file using torch.export AOTInductor for CUDA deployment.
ONNX             Save the protected model as an ONNX file (same file for CPU and GPU deployment).

If you use all the available export formats, the model will be protected three times: once for the CPU versions (TorchScript + AOTInductor), once for the GPU versions (TorchScript + AOTInductor), and once for the ONNX version. This is equivalent to protecting three different models (with a different activation key for each).
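The combination semantics of | can be sketched with Python's standard enum.Flag. The class below is only a stand-in mirroring the value names listed above, not the real SkProtectorPyTorch.ExportFormat:

```python
from enum import Flag, auto

class ExportFormatSketch(Flag):
    # Stand-in for SkProtectorPyTorch.ExportFormat, for illustration only.
    TORCHSCRIPT_CPU = auto()
    TORCHSCRIPT_GPU = auto()
    INDUCTOR_CPU = auto()
    INDUCTOR_GPU = auto()
    ONNX = auto()

# Combine several formats with the | operator, as in the default value
# of model_export_formats.
formats = (ExportFormatSketch.ONNX
           | ExportFormatSketch.INDUCTOR_CPU
           | ExportFormatSketch.INDUCTOR_GPU)
print(ExportFormatSketch.ONNX in formats)             # True
print(ExportFormatSketch.TORCHSCRIPT_GPU in formats)  # False
```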

  • (Optional) deployment_platform: An SkProtectorPyTorch.ExportPlatform value indicating for which deployment platform the key should be saved. By default, the key is exported as TXT. The possible values are described in the next table.

  • (Optional) dynamic_shapes: A parameter defining the dynamic dimensions of the model inputs for both the torch and ONNX exports. This parameter must be consistent with the dynamic_shapes argument of onnx_params if provided. If you use dynamic_axes in onnx_params instead, set this value to None.

If you declare dynamic dimensions in dynamic_shapes, the corresponding dimensions of the tensors in input_tensors must be greater than 1. For example, use input_tensors=[torch.randn(2,224,224,3)] if you expect (batch,224,224,3) as input.

Value  Usage
LINUX  Save the key to be imported into the Skyld Linux Keystore. This option generates:
       - A configuration file named skrpl
       - Your protected model named <protected_model_name>.model_ext, where model_ext is .onnx or .pt2
       - An encrypted key per protected model, named mev*
TXT    Save the keys as plaintext text files. Two files are written per model; they are required to run the model using the runners described in this section.
  • (Optional) onnx_params: A dictionary containing arguments (key/value pairs) to use when exporting to the ONNX format, such as input and output tensor names. These arguments are passed internally to the torch.onnx.export command. The default values are listed below and can be modified if a different configuration is needed.
onnx_params={
    "input_names": ["input.1"],
    "output_names": ["output.1"],
    "dynamic_axes": None,
}

The "input_names" key must not be empty, otherwise the program will raise an exception.
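For example, a hypothetical override that names the tensors and makes the batch axis dynamic via dynamic_axes could look like this (the tensor names are illustrative):

```python
# Overrides the defaults shown above; these key/value pairs are
# forwarded internally to torch.onnx.export.
onnx_params = {
    "input_names": ["image"],    # must not be empty
    "output_names": ["logits"],
    "dynamic_axes": {"image": {0: "batch"}, "logits": {0: "batch"}},
}
```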

  • (Optional) inductor_configs: A dictionary containing arguments to use when exporting using torch.export AOTInductor. Please refer to this documentation for more details.

  • (Optional) save_original: A boolean with a default value of False, which should be set to True if you want to save the original model in ONNX/TorchScript/AOTInductor formats for comparison purposes. Note that the onnx_params dictionary used is the same for both the original and protected models.

  • (Optional) quantize_compatibility: A boolean with default value False, which should be set to True if you want to quantize the protected model (see section quantization).

  • (Optional) test_key: A boolean with a default value of False, which should be set to True if you want to disable the randomized session key. This option allows you to test the results of your model without the runners, using a static activation key as input. Once enabled, the content of the key_vector_inv_<model_name>.txt file should be supplied as the last input when running the model in the environment of your choice. (⚠️ if using a GPU, see section GPU support for protected model to avoid differences between model results).
  • (Optional) platform_version: An int value with a default value of 2 that defines the version of the configuration file that will be exported after protection. If models were protected with a lower version of SkProtect-Pytorch, the script will automatically perform a migration. The migration requires having the old protected models contained in the outputs directory of the application (within the model’s subdirectory). If you do not have the old protected models, re-protect all models in a clean environment. In practice, you should change the platform_version default value only if your SkRuntime version is not compatible with the platform version (see Platform version compatibility).
  • (Optional) customer: A skprotect_pytorch.Customer instance with a default value of Customer(1, "") specifying the customer id and name associated with the current deployment. The chosen customer id will be used to track and limit usage of the models for a given customer on the license server. Requires platform_version >=2.

With platform_version >=1, model integrity is checked on the runtime side. If you post-process your model, it will not be recognized by SkRuntime. If you need model post-processing, contact Skyld.

Platform version compatibility

Platform version  SkProtect minimal version  SkRuntime minimal version  Licensing server minimal version
0                 0.0.2                      0.0.0                      0.0.0
1                 0.2.4                      0.1.0                      0.0.0
2                 0.2.5                      0.1.1                      0.0.3

Public method migrate_configuration

Migrate the configuration file to a greater version. The configuration file from the execution environment will be replaced by one with the targeted version.

The migration requires having the old protected models contained in the outputs directory of the application (within the model’s subdirectory). If you do not have the old protected models, re-protect all models in a clean environment.

def migrate_configuration(self, version: int, deployment_platform: ExportPlatform, customer: Optional[Customer] = Customer())
  • version: An int value containing the platform version targeted by the migration.
  • deployment_platform: A single SkProtectorPyTorch.ExportPlatform value indicating for which deployment platform the configuration should be migrated.
  • (Optional) customer: Only used if migrating from version 1 to 2. An instance of skprotect_pytorch.Customer to be associated with the generated configuration file.
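As a sketch, a migration to platform version 2 for a TXT deployment might be invoked as follows (the function is only defined here, not executed, and the identifiers follow the signature above):

```python
def migrate_to_v2(protector, customer):
    # Target platform version 2, attaching the customer that is used
    # when migrating from version 1 to 2.
    return protector.migrate_configuration(
        version=2,
        deployment_platform=SkProtectorPyTorch.ExportPlatform.TXT,
        customer=customer,
    )
```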

Public method update_customer

Update the customer of the current configuration file. This function should be run in an environment containing a configuration file with platform version 2. The configuration file from the execution environment will be replaced by one with the targeted customer id.

def update_customer(self, customer: Customer, deployment_platform: ExportPlatform)
  • customer: An instance of skprotect_pytorch.Customer corresponding to the customer that should be associated with the configuration file.
  • deployment_platform: An SkProtectorPyTorch.ExportPlatform value indicating for which deployment platform the configuration should be updated.
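A minimal sketch of re-associating a TXT deployment's configuration with another customer (defined only, not executed here):

```python
def switch_customer(protector, customer):
    # Replace the customer attached to the current (platform version 2)
    # configuration file for a TXT deployment.
    return protector.update_customer(
        customer=customer,
        deployment_platform=SkProtectorPyTorch.ExportPlatform.TXT,
    )
```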

Class Customer

Represents a customer for licensing purposes.

Public constructor __init__

Initializes a customer instance. Arguments:

  • (Optional) id: The customer id (should exist on the license server) with a default value of 1.
  • (Optional) name: The customer name with an empty string as a default value.