Securing and Protecting Machine Learning Models on Embedded Devices

Learn how to secure and protect machine learning models on embedded devices against threats using authentication, encryption, and obfuscation techniques.

1. Introduction

Machine learning models deployed on embedded devices play a crucial role in various applications, from smart home devices to industrial automation. However, their deployment introduces unique challenges related to security and protection. In this section, we’ll explore the fundamental concepts and strategies for securing and safeguarding machine learning models on embedded devices.

Why is Model Security Important?
Machine learning models are vulnerable to attacks that can compromise their integrity, confidentiality, and availability. Ensuring model security is essential to prevent unauthorized access, data leakage, and adversarial manipulation. Let’s dive into the key aspects of model security.

Threats to Machine Learning Models on Embedded Devices
Embedded devices face several threats that can impact the security of deployed machine learning models. These threats include:

1. Physical Access: Attackers with physical access to the device can extract model parameters or inject malicious code.
2. Side-Channel Attacks: Adversaries can exploit power consumption, electromagnetic radiation, or timing information to infer model details.
3. Model Inversion: Attackers can reconstruct sensitive training data or attributes from the model’s outputs.
4. Adversarial Attacks: Crafted input data can cause misclassification or model malfunction.
5. Privacy Violations: Sensitive information within the model may be exposed.

Model Security
To enhance model security, consider the following measures:

1. Authentication: Implement strong authentication mechanisms to prevent unauthorized model access.
2. Encryption: Encrypt model weights, input data, and communication channels to protect against eavesdropping and tampering.
3. Access Control: Restrict model access based on user roles and permissions.
4. Secure Boot: Ensure the device boots only trusted firmware and software.
5. Secure Storage: Store model parameters securely to prevent unauthorized modifications.

Model Protection
Protecting the model itself involves techniques such as:

1. Obfuscation: Hide critical information (e.g., model architecture, hyperparameters) to deter attackers.
2. Watermarking: Embed unique identifiers in the model to trace ownership and detect unauthorized copies.
3. Code Signing: Sign model code to verify its integrity during execution.

Conclusion
Securing and protecting machine learning models on embedded devices requires a holistic approach. By understanding the threats and implementing robust security measures, developers can ensure the reliability and trustworthiness of their models in real-world deployments.

Remember, model security is an ongoing process. Stay informed about emerging threats and adapt your defenses accordingly. In the next sections, we’ll examine the threat landscape in more detail and then delve into specific techniques for authentication, encryption, and obfuscation to fortify your machine learning models against potential attacks.

2. Threats to Machine Learning Models on Embedded Devices

Machine learning models deployed on embedded devices face a range of threats that can compromise their functionality and security. As a developer or system administrator, understanding these threats is crucial for safeguarding your models. Let’s explore the key threats and their implications:

1. Physical Access:
Embedded devices are often physically accessible, making them vulnerable to tampering. An attacker with physical access can extract model parameters, modify firmware, or inject malicious code. Consider securing physical access points and implementing tamper-evident measures.

2. Side-Channel Attacks:
Side-channel attacks exploit unintended information leakage during model execution. By analyzing power consumption, electromagnetic radiation, or timing variations, attackers can infer model details. Implement countermeasures such as masking, noise injection, or secure hardware design.

3. Model Inversion:
Model inversion attacks aim to recover sensitive information from the model. Adversaries can reconstruct training data or infer private input features from the model’s outputs. Protect against model inversion by limiting access to raw prediction scores and applying differential privacy during training.

4. Adversarial Attacks:
Adversarial examples are crafted input data that cause misclassification or model malfunction. These attacks exploit vulnerabilities in the model’s decision boundaries. Regularize your model, use robust training techniques, and validate inputs to mitigate adversarial threats.

5. Privacy Violations:
Machine learning models may inadvertently leak sensitive information about individuals. Ensure compliance with privacy regulations (e.g., GDPR) and consider techniques like federated learning or differential privacy.

6. Resource Constraints:
Embedded devices have limited computational resources (CPU, memory, energy). Resource-intensive security mechanisms may impact model performance. Optimize security solutions to balance protection and efficiency.

7. Firmware Updates:
Updating firmware on embedded devices can introduce security risks if not done securely. Use secure boot mechanisms, signed updates, and over-the-air (OTA) encryption to prevent unauthorized modifications.

8. Supply Chain Attacks:
Malicious actors can compromise the supply chain (e.g., during manufacturing or distribution) to inject backdoors or modify firmware. Verify the integrity of components and establish a secure supply chain.

Remember that each threat requires tailored defenses. In the next sections, we’ll delve into specific strategies for model security and protection, including authentication, encryption, and obfuscation techniques. Stay vigilant and proactively address these threats to ensure the robustness of your machine learning models on embedded devices.

2.1. Model Security

Model Security: Protecting Your Machine Learning Models

Machine learning models are valuable assets, and their security is paramount. Whether your model runs on a tiny edge device or a powerful server, safeguarding it from threats is essential. In this section, we’ll explore practical strategies to enhance model security.

1. Authentication Mechanisms:
Authentication ensures that only authorized users can access your model. Consider the following techniques:

API Keys: Issue unique API keys to authorized clients. Validate keys before allowing model access.
OAuth: Use OAuth tokens for secure authentication between services.
JWT (JSON Web Tokens): Generate and validate JWTs to authenticate requests.

2. Role of Authentication in Model Security:
Authentication serves as the first line of defense. It prevents unauthorized access, data leakage, and malicious requests. By implementing strong authentication mechanisms, you reduce the risk of model compromise.

3. Implementing Secure Authentication:
Follow these best practices:

HTTPS: Always use HTTPS to encrypt communication between clients and your model server.
Rate Limiting: Limit the number of requests per client to prevent abuse.
Token Expiry: Set token expiration times to minimize exposure.

4. Encryption Techniques for Model Protection:
Encrypting model-related data ensures confidentiality. Explore these encryption methods:

Symmetric Encryption: Use a shared secret key for both encryption and decryption.
Asymmetric Encryption: Employ public-private key pairs for secure communication.

5. Secure Storage:
Where you store your model matters. Protect model weights, hyperparameters, and configuration files:

Hardware Security Modules (HSMs): Use HSMs for secure key storage.
Encrypted Databases: Store model parameters in encrypted databases.

Remember, model security is an ongoing process. Regularly audit access logs, monitor for anomalies, and stay informed about emerging threats. By implementing robust security practices, you can confidently deploy your machine learning models on embedded devices.

Next: In the following section, we’ll turn to model protection techniques such as obfuscation and watermarking, which further fortify your models against potential attacks.

2.2. Model Protection

Model Protection: Safeguarding Your Machine Learning Models

Protecting your machine learning models goes beyond authentication and encryption. Model protection involves additional strategies to ensure their integrity and resilience. In this section, we’ll explore techniques to fortify your models against attacks and unauthorized access.

1. Code Obfuscation:
Code obfuscation makes it challenging for attackers to understand and reverse-engineer your model. Consider the following approaches:

Minification: Shorten variable names, remove comments, and reduce whitespace.
Control Flow Obfuscation: Shuffle code execution paths to confuse reverse engineers.
Symbol Renaming: Rename function and variable names to obscure their purpose.

2. Data Obfuscation:
Protect sensitive data used during model training or inference. Techniques include:

Differential Privacy: Add noise to training data to prevent individual data points from being exposed.
Data Perturbation: Introduce small random variations to input features.
Data Masking: Encrypt or hash sensitive data before feeding it to the model.

3. Model Watermarking:
Embed unique identifiers (watermarks) into your model. If unauthorized copies appear, you can trace their origin. A well-designed watermark has negligible impact on model performance while adding traceability.

4. Secure Deployment:
When deploying your model, follow these practices:

Containerization: Use containers (e.g., Docker) to isolate your model from the host environment.
Secure APIs: Expose only necessary endpoints and validate input data.
Regular Updates: Keep your model and dependencies up to date to address security vulnerabilities.

5. Monitoring and Anomaly Detection:
Monitor model behavior and detect anomalies. Set up alerts for unexpected patterns or suspicious activity. Consider using tools like Prometheus or Grafana.
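To make this concrete, here is a minimal, illustrative sketch of in-process anomaly detection: it tracks per-second request counts in a rolling window and flags bursts that deviate sharply from the recent mean. The window size and z-score threshold are arbitrary assumptions; in production you would typically export such metrics to a monitoring stack like the Prometheus/Grafana tools mentioned above.

```python
import time
from collections import deque
from statistics import mean, stdev

class RequestRateMonitor:
    """Flag anomalous request bursts using a z-score over a rolling window.

    The window size and threshold below are illustrative, not tuned values.
    """

    def __init__(self, window_seconds: int = 60, z_threshold: float = 3.0):
        self.counts = deque(maxlen=window_seconds)  # requests per one-second bucket
        self.z_threshold = z_threshold
        self.current_second = int(time.time())
        self.current_count = 0

    def record_request(self) -> bool:
        """Record one request; return True if the current rate looks anomalous."""
        now = int(time.time())
        if now != self.current_second:              # roll over to a new bucket
            self.counts.append(self.current_count)
            self.current_second, self.current_count = now, 0
        self.current_count += 1

        if len(self.counts) < 10:                   # not enough history yet
            return False
        mu, sigma = mean(self.counts), stdev(self.counts)
        if sigma == 0:
            return self.current_count > mu + 1
        return (self.current_count - mu) / sigma > self.z_threshold
```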

Remember that model protection is an ongoing effort. Regularly assess your security measures, stay informed about emerging threats, and adapt your defenses accordingly. By combining authentication, encryption, obfuscation, and vigilant monitoring, you can confidently deploy your machine learning models on embedded devices.

Next: In the following sections, we’ll take a detailed look at authentication mechanisms, encryption techniques, and obfuscation strategies for real-world deployments.

3. Authentication Mechanisms

Authentication Mechanisms for Secure Model Access

When deploying machine learning models on embedded devices, ensuring secure authentication is crucial. By implementing robust authentication mechanisms, you can control access to your models and prevent unauthorized usage. Let’s explore the key aspects of authentication and practical steps to enhance model security.

Why Authentication Matters:
Authentication serves as the gatekeeper for your machine learning models. It verifies the identity of users or services attempting to access the model. Without proper authentication, your model could be exposed to unauthorized requests, data leakage, or malicious attacks.

Types of Authentication:
Consider the following authentication methods:

1. API Keys: Issue unique API keys to authorized clients. These keys act as credentials for accessing your model’s API endpoints. Validate the keys before allowing any requests.

2. OAuth (Open Authorization): OAuth tokens provide secure access between services. They are commonly used for third-party integrations and allow fine-grained control over permissions.

3. JWT (JSON Web Tokens): JWTs are compact, URL-safe tokens that carry claims about the user or client. They are commonly used for stateless authentication in web applications and APIs.
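As a concrete illustration of the JWT approach, here is a minimal sketch using the third-party PyJWT library (`pip install pyjwt`). The secret, claim names, and one-hour lifetime are assumptions chosen for the example; a real deployment would load the secret from secure storage.

```python
import datetime

import jwt  # PyJWT: pip install pyjwt

SECRET_KEY = "replace-with-a-strong-random-secret"  # assumption: loaded from secure storage

def issue_token(client_id: str) -> str:
    """Issue a short-lived, signed token for an authorized client."""
    payload = {
        "sub": client_id,  # subject: who this token was issued to
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=1),  # expiry enforced on decode
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def validate_token(token: str) -> str | None:
    """Return the client id if the token is valid and unexpired, else None."""
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return payload["sub"]
    except jwt.InvalidTokenError:  # covers expired, tampered, or malformed tokens
        return None
```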

Best Practices for Secure Authentication:
Follow these guidelines to enhance model security:

Always Use HTTPS: Encrypt communication between clients and your model server using HTTPS. This prevents eavesdropping and data interception.

Rate Limiting: Limit the number of requests per client to prevent abuse. Implement rate-limiting mechanisms to avoid overloading your model.

Token Expiry: Set token expiration times. Short-lived tokens reduce the risk of exposure if they are compromised.

Example:
Suppose you have an edge device running a speech recognition model. To ensure secure access, you can issue an API key to the device. The device includes this key in its requests to the model server. The server validates the key and grants access only to authorized devices.
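A minimal server-side sketch of this scenario might look like the following; the key store and lookup scheme are assumptions for illustration. Note the constant-time comparison, which avoids leaking key material through timing differences (a side-channel concern raised earlier).

```python
import hmac

# Assumption for illustration: real deployments load (ideally hashed) keys
# from secure storage rather than hard-coding them.
AUTHORIZED_KEYS = {"edge-device-01": "k3y-f0r-device-01"}

def is_authorized(device_id: str, presented_key: str) -> bool:
    """Validate an API key using a constant-time comparison."""
    expected = AUTHORIZED_KEYS.get(device_id)
    if expected is None:
        return False
    # hmac.compare_digest takes the same time whether the first or last
    # character differs, defeating timing-based key-guessing attacks.
    return hmac.compare_digest(expected, presented_key)
```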

Conclusion:
Authentication is a fundamental step in securing your machine learning models. By implementing strong authentication mechanisms, you protect your models from unauthorized access and maintain their integrity.

Next: In the upcoming subsections, we’ll examine the role authentication plays in model security and walk through implementing it, before moving on to encryption and obfuscation.

3.1. Role of Authentication in Model Security

Role of Authentication in Model Security

Authentication plays a critical role in securing your machine learning models on embedded devices. It ensures that only authorized users or services can access your models, preventing unauthorized requests and potential attacks. Let’s explore the significance of authentication and its practical implications.

Why Authentication Matters:
Authentication serves as the first line of defense for your models. It answers the fundamental question: “Who is allowed to access this model?” Without proper authentication, your model could be exposed to various risks:

Unauthorized Access: Anyone could send requests to your model, potentially compromising its integrity or leaking sensitive information.
Data Leakage: Malicious actors might extract model parameters or intercept data during communication.
Adversarial Attacks: Without authentication, your model becomes vulnerable to adversarial inputs designed to manipulate its behavior.

Types of Authentication and Best Practices:
The authentication methods and best practices introduced in the previous section (API keys, OAuth, JWTs, HTTPS everywhere, rate limiting, and short-lived tokens) are the concrete tools that fulfill this gatekeeping role, so we won’t repeat them here. The focus of this section is the why: each of the risks above is mitigated by verifying a caller’s identity before any request reaches the model.

Conclusion:
Authentication is not just a technical detail; it’s a critical aspect of model security. By implementing strong authentication mechanisms, you protect your models from unauthorized access and maintain their integrity.

Next: In the following section, we’ll walk through the practical steps of implementing secure authentication.

3.2. Implementing Secure Authentication

Implementing Secure Authentication for Your Machine Learning Models

Now that we understand the importance of authentication, let’s dive into practical steps for implementing secure authentication mechanisms. Whether you’re deploying models on edge devices or cloud servers, these guidelines will help you protect your models effectively.

1. Choose an Authentication Method:
Select an appropriate authentication method based on your use case:

API Keys: Ideal for simple client-server interactions. Issue unique API keys to authorized clients.
OAuth: Suitable for third-party integrations. Use OAuth tokens for secure communication.
JWT (JSON Web Tokens): Lightweight and versatile. Generate and validate JWTs for stateless authentication.

2. Validate Tokens:
When a client sends a request, validate the authentication token (e.g., API key or JWT). Ensure it’s not expired and matches an authorized user or service.

3. Rate Limiting:
Prevent abuse by limiting the number of requests per client. Implement rate-limiting mechanisms to avoid overloading your model server (a minimal sketch follows this list).

4. Token Expiry:
Set token expiration times. Short-lived tokens reduce the risk of exposure if they are compromised.

5. Secure Communication:
Always use HTTPS to encrypt communication between clients and your model server. This prevents eavesdropping and data interception.
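To illustrate step 3, here is a minimal fixed-window rate limiter sketch. The per-minute limit is an arbitrary assumption, and production deployments usually enforce limits at a gateway or reverse proxy rather than in application code.

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per client in each `window`-second window."""

    def __init__(self, limit: int = 60, window: int = 60):
        self.limit, self.window = limit, window
        # client id -> (window index, request count in that window)
        self.windows: dict[str, tuple[int, int]] = defaultdict(lambda: (0, 0))

    def allow(self, client_id: str) -> bool:
        current = int(time.time()) // self.window
        window_index, count = self.windows[client_id]
        if window_index != current:        # a new window has started: reset
            window_index, count = current, 0
        if count >= self.limit:
            return False                   # over quota: reject the request
        self.windows[client_id] = (window_index, count + 1)
        return True
```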

Example Scenario:
Suppose you’re deploying a sentiment analysis model on an embedded device. You issue an API key to the device. When the device sends a request to the model server, it includes the API key. The server validates the key and grants access only if it’s valid.

Conclusion:
Implementing secure authentication ensures that only authorized users or services interact with your machine learning models. By following best practices, you enhance model security and maintain data integrity.

Next: In the upcoming sections, we’ll explore encryption techniques and obfuscation strategies to further enhance model protection.

4. Encryption Techniques for Model Protection

Encryption Techniques for Model Protection

Encrypting your machine learning models ensures their confidentiality and prevents unauthorized access. By applying encryption techniques, you can safeguard your models’ parameters, data, and communication channels. Let’s explore the key encryption methods for model protection:

1. Symmetric Encryption:
Symmetric encryption uses a shared secret key for both encryption and decryption. Here’s how it works:

Key Generation: Generate a secret key.
Encryption: Encrypt model weights, hyperparameters, and configuration files using the key.
Decryption: Decrypt the data when needed using the same key.

2. Asymmetric Encryption:
Asymmetric encryption involves public-private key pairs. Here’s how it enhances model security:

Key Pair Generation: Generate a public key (used for encryption) and a private key (used for decryption).
Model Encryption: Encrypt sensitive model data (e.g., weights) using the recipient’s public key.
Model Decryption: The recipient (e.g., model server) uses its private key to decrypt the data.

3. Secure Communication Channels:
When deploying models, ensure secure communication between clients and servers:

– Use HTTPS (TLS/SSL) to encrypt data in transit.
– Validate server certificates to prevent man-in-the-middle attacks.
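On the client side, a sketch using the widely used `requests` library (which verifies server certificates by default) might look like this. The endpoint URL, header name, and payload shape are hypothetical.

```python
import requests

def get_prediction(api_key: str, features: list[float]) -> dict:
    """Send an inference request over HTTPS with certificate verification."""
    response = requests.post(
        "https://models.example.com/v1/predict",  # hypothetical endpoint
        json={"features": features},
        headers={"X-API-Key": api_key},           # hypothetical header name
        timeout=5,                                # fail fast on unreachable servers
        # verify=True is the default: the server certificate is checked against
        # trusted CAs, which guards against man-in-the-middle attacks.
    )
    response.raise_for_status()
    return response.json()
```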

Example Scenario:
Suppose you’re deploying a recommendation model on an edge device. You encrypt the model weights using symmetric encryption. During communication with the server, the device uses HTTPS to securely send requests and receive predictions.

Conclusion:
Encryption is a powerful tool for protecting your machine learning models. By applying symmetric or asymmetric encryption and securing communication channels, you enhance model security and maintain data integrity.

Next: The following subsections examine symmetric and asymmetric encryption in more detail; after that, we’ll turn to obfuscation strategies and best practices for embedded devices.

4.1. Symmetric Encryption

Symmetric Encryption: Protecting Your Model Data

Symmetric encryption is a powerful technique for securing your machine learning models. It relies on a shared secret key for both encryption and decryption. Let’s explore how symmetric encryption works and how you can apply it to protect your model data:

How Symmetric Encryption Works:
1. Key Generation: You generate a secret key (a long, random string).
2. Encryption: When storing or transmitting sensitive data (such as model weights or hyperparameters), you encrypt it using the secret key.
3. Decryption: When needed (e.g., during model inference), you decrypt the data using the same secret key.
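Here is a minimal sketch of these three steps using AES-256-GCM from the `cryptography` package (`pip install cryptography`). The file names are assumptions; in a real deployment the key would come from secure storage (see the best practices below), never from source code.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 1. Key generation: a 256-bit random key (store it securely, e.g., in an HSM).
key = AESGCM.generate_key(bit_length=256)

# 2. Encryption: protect serialized model weights at rest.
plaintext = open("model_weights.bin", "rb").read()  # hypothetical weights file
nonce = os.urandom(12)                              # must be unique per encryption
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
open("model_weights.enc", "wb").write(nonce + ciphertext)

# 3. Decryption: recover the weights for inference using the same key.
blob = open("model_weights.enc", "rb").read()
recovered = AESGCM(key).decrypt(blob[:12], blob[12:], None)
assert recovered == plaintext
```

Because GCM is an authenticated mode, decryption also fails loudly if the ciphertext has been tampered with, adding integrity protection on top of confidentiality.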

Benefits of Symmetric Encryption:
Efficiency: Symmetric encryption is computationally cheap and fast, making it well suited to resource-constrained embedded devices.
Confidentiality: Encrypted data remains confidential even if intercepted by unauthorized parties.

Best Practices:
Follow these steps to implement symmetric encryption for your models:

1. Choose a Strong Key: Generate a secure secret key. Avoid using easily guessable keys.
2. Protect the Key: Safeguard the secret key. Store it securely (e.g., in a hardware security module or encrypted database).
3. Encrypt Model Data: Encrypt sensitive model components (e.g., weights, configuration files) before deployment.
4. Secure Communication: Use encrypted channels such as HTTPS; after the TLS handshake, session traffic is protected with symmetric ciphers, so your data in transit benefits from exactly this technique.

Example Scenario:
You’re deploying an image classification model on an edge device. Before storing the model weights, you encrypt them using symmetric encryption. During inference, the device decrypts the weights using the same secret key.

Conclusion:
Symmetric encryption provides a robust way to protect your model data. By applying it, you ensure confidentiality and maintain the integrity of your machine learning models.

Next: In the following sections, we’ll explore asymmetric encryption and obfuscation strategies to further enhance model security.

4.2. Asymmetric Encryption

Asymmetric Encryption: Enhancing Model Security with Public-Private Key Pairs

Asymmetric encryption, also known as public-key cryptography, provides a robust way to protect your machine learning models and their sensitive data. Unlike symmetric encryption, which relies on a shared secret key, asymmetric encryption involves a pair of keys: a public key and a private key. Let’s explore how asymmetric encryption works and how you can apply it to enhance model security.

How Asymmetric Encryption Works:
1. Key Pair Generation: You generate a key pair:
Public Key: This key is widely distributed and used for encryption.
Private Key: This key remains confidential and is used for decryption.
2. Model Encryption: When storing or transmitting sensitive data (such as model weights or configuration files), you encrypt it using the recipient’s public key.
3. Model Decryption: The recipient (e.g., your model server) uses its private key to decrypt the data.
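One practical caveat: asymmetric ciphers such as RSA are slow and limited to payloads smaller than the key size, so bulk data like model weights is normally protected with hybrid encryption, where a random symmetric key encrypts the data and only that key is encrypted with the recipient’s public key. A minimal sketch using the `cryptography` package:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key pair generation: the private key never leaves the recipient.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Hybrid encryption: AES-GCM for the bulk data, RSA-OAEP to wrap the AES key.
model_update = b"...serialized model update..."  # placeholder payload
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, model_update, None)
wrapped_key = public_key.encrypt(aes_key, OAEP)

# Decryption on the recipient side: unwrap the AES key, then the payload.
unwrapped = private_key.decrypt(wrapped_key, OAEP)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == model_update
```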

Benefits of Asymmetric Encryption:
Confidentiality: Encrypted data remains confidential even if intercepted by unauthorized parties.
Authentication: Asymmetric encryption also enables digital signatures, allowing you to verify the authenticity of messages or model updates.

Best Practices:
Follow these steps to implement asymmetric encryption for your models:

1. Choose Strong Key Sizes: Generate sufficiently long key pairs (e.g., 2048 bits or more) to resist attacks.
2. Protect the Private Key: Safeguard the private key. Store it securely (e.g., in a hardware security module).
3. Secure Communication Channels: Use HTTPS (TLS/SSL), which relies on asymmetric cryptography during the handshake to authenticate the server and exchange session keys.

Example Scenario:
You’re deploying a fraud detection model on a cloud server. Before transmitting model updates, you encrypt them using the recipient’s public key. The server uses its private key to decrypt and apply the updates.

Conclusion:
Asymmetric encryption provides strong security guarantees for your machine learning models. By leveraging public-private key pairs, you enhance confidentiality and ensure the integrity of your model data.

Next: In the following sections, we’ll explore obfuscation strategies and summarize best practices for securing and protecting your models on embedded devices.

5. Obfuscation Strategies

Obfuscation Strategies: Concealing Model Details for Enhanced Security

Obfuscation is a technique used to obscure or hide critical information within your machine learning models. By applying obfuscation strategies, you make it harder for attackers to reverse-engineer your models or extract sensitive details. Let’s explore some effective obfuscation techniques:

1. Code Obfuscation:
Purpose: Hide the model’s architecture, hyperparameters, and implementation details.
How: Rename variables, functions, and classes to non-descriptive names. Remove comments and whitespace.
Benefits: Makes it challenging for adversaries to understand the model’s inner workings.

2. Data Obfuscation:
Purpose: Protect training data and prevent model inversion attacks.
How: Apply differential privacy, add noise to training data, or use synthetic data.
Benefits: Guards against data leakage and ensures privacy.

3. Model Watermarking:
Purpose: Embed unique identifiers (watermarks) within the model.
How: Embed the watermark in the model’s weights, or train the model to respond in a predetermined way to a secret trigger set (illustrated after this list).
Benefits: Helps trace ownership and detect unauthorized copies.
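As an illustration of the third point, here is a minimal sketch of verifying a trigger-set watermark, one common approach in which the model is trained to give predetermined outputs on a small set of secret “key” inputs. The 90% threshold and the `predict` callable are assumptions for the example.

```python
import numpy as np

def verify_watermark(predict, trigger_inputs, expected_labels, threshold=0.9):
    """Return True if the model reproduces the secret trigger behavior.

    A legitimate copy of the watermarked model should match nearly all
    trigger labels; an independently trained model should not.
    """
    predictions = np.array([predict(x) for x in trigger_inputs])
    match_rate = float(np.mean(predictions == np.asarray(expected_labels)))
    return match_rate >= threshold
```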

Example Scenario:
You’re deploying a speech recognition model on an embedded device. To protect against model inversion, you apply data obfuscation by adding noise to the training data. Additionally, you use code obfuscation to obscure the model’s architecture and variable names.

Conclusion:
Obfuscation complements encryption and authentication in securing your machine learning models. By concealing critical information, you enhance model protection and reduce the risk of unauthorized access.

Next: The following subsections examine code obfuscation and data obfuscation in more detail.

5.1. Code Obfuscation

Code Obfuscation: Concealing Model Implementation Details

Code obfuscation is a technique used to make your machine learning model’s code less readable and harder to reverse-engineer. By applying code obfuscation strategies, you protect sensitive information such as the model’s architecture, hyperparameters, and implementation details. Let’s explore how to effectively obfuscate your model code:

1. Rename Variables and Functions:
– Change descriptive variable and function names to non-meaningful ones.
– Example: Replace “model_weights” with “a1b2c3_weights.”

2. Remove Comments and Whitespace:
– Strip out comments and unnecessary whitespace.
– Example: Remove explanatory comments that reveal implementation details.

3. Minify JavaScript and CSS:
– If deploying web-based models, minify JavaScript and CSS files.
– Example: Use tools like UglifyJS or Terser to compress code.

4. Control Flow Obfuscation:
– Modify control flow structures (loops, conditionals) to confuse attackers.
– Example: Introduce dummy loops or nested conditionals.

5. String Encryption:
– Encrypt sensitive strings (e.g., API keys, URLs) within your code.
– Example: Store encrypted API keys and decrypt them during runtime.
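The fifth point can be sketched with Fernet from the `cryptography` package: secrets ship in encrypted form and are decrypted only in memory at runtime. Where the Fernet key itself lives (an HSM, OS keystore, or key service) is the crux of the scheme; the values below are placeholders.

```python
from cryptography.fernet import Fernet

# Done once, offline: encrypt the secret before shipping the code.
key = Fernet.generate_key()                          # keep this out of the codebase
token = Fernet(key).encrypt(b"sk-example-api-key")   # placeholder secret

# At runtime: decrypt in memory only; the plaintext never appears in the source.
def load_api_key(encrypted_token: bytes, fernet_key: bytes) -> str:
    return Fernet(fernet_key).decrypt(encrypted_token).decode()

api_key = load_api_key(token, key)
```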

Example Scenario:
You’re deploying a sentiment analysis model as a Python package. Before distributing it, you apply code obfuscation by renaming variables, removing comments, and minifying the code. This ensures that the model’s implementation details remain hidden.

Conclusion:
Code obfuscation adds an extra layer of security to your machine learning models. By making the code less transparent, you reduce the risk of unauthorized access and protect your intellectual property.

Next: In the following section, we’ll explore data obfuscation techniques for protecting training data and preventing model inversion.

5.2. Data Obfuscation

Data Obfuscation: Protecting Model Privacy and Preventing Model Inversion

Data obfuscation is a critical technique for safeguarding your machine learning models against attacks that exploit training data. By applying data obfuscation strategies, you can protect sensitive information, maintain privacy, and prevent model inversion. Let’s dive into effective data obfuscation methods:

1. Differential Privacy:
Purpose: Add controlled noise to training data to protect individual records.
How: Add calibrated noise to input features or aggregate query results during preprocessing (a minimal sketch follows this list).
Benefits: Balances privacy and utility, making it harder to infer specific data points.

2. Noisy Labels:
Purpose: Prevent attackers from learning the exact labels of training samples.
How: Introduce label noise by flipping labels or adding random errors.
Benefits: Makes it harder to recover the exact labels of training samples, though at some cost to label quality.

3. Synthetic Data Generation:
Purpose: Create artificial data points that resemble real samples.
How: Use generative models (e.g., GANs) to generate synthetic data.
Benefits: Enhances privacy without compromising model performance.
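To make the first technique concrete, here is a minimal sketch of the Laplace mechanism, the classic differential-privacy building block: noise scaled to sensitivity / epsilon is added to a query result (here, a mean). The epsilon, clipping bounds, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Values are clipped to [lower, upper], so one record can change the mean
    by at most (upper - lower) / n: the query's sensitivity.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

# Illustrative usage: a privacy budget of epsilon = 1.0.
ages = np.array([34, 45, 29, 61, 50])
print(dp_mean(ages, lower=0.0, upper=100.0, epsilon=1.0))
```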

Example Scenario:
You’re training a medical diagnosis model using patient health records. To protect patient privacy, you apply differential privacy by adding controlled noise to the features. Additionally, you introduce noisy labels to prevent attackers from inferring specific patient conditions.

Conclusion:
Data obfuscation is essential for maintaining privacy and preventing model inversion attacks. By applying these techniques, you ensure that your machine learning models remain robust and trustworthy.

Next: In the final section, we’ll summarize best practices and provide recommendations for securing and protecting your models on embedded devices.

6. Conclusion and Best Practices

Conclusion and Best Practices for Securing Machine Learning Models on Embedded Devices

Congratulations! You’ve learned essential techniques to secure and protect your machine learning models on embedded devices. As you wrap up this guide, let’s summarize the key takeaways and provide best practices:

1. Understand Threats:
– Familiarize yourself with threats specific to embedded devices, such as physical access, side-channel attacks, and model inversion.

2. Model Security:
– Implement strong authentication mechanisms to control model access.
– Encrypt model weights, input data, and communication channels.
– Securely store model parameters using hardware security modules (HSMs) or encrypted databases.

3. Model Protection:
– Use obfuscation techniques like code obfuscation and data obfuscation.
– Embed watermarks in your model to trace ownership and detect unauthorized copies.

4. Asymmetric Encryption:
– Leverage public-private key pairs for secure communication.
– Protect your private key and use HTTPS with TLS/SSL.

5. Best Practices:
– Regularly audit access logs and monitor for anomalies.
– Stay informed about emerging threats and adapt your defenses accordingly.

Remember: Security is an ongoing process. Continuously assess and enhance your model’s defenses to stay ahead of potential attacks.

Next Steps:
– Apply these principles to your specific use case and embedded device.
– Explore additional resources on secure deployment and model privacy.

Thank you for joining us on this journey to secure and protect machine learning models on embedded devices. Your commitment to robustness and reliability ensures the success of your projects. Happy coding!

Attribution:
This guide was prepared by the author based on research and industry best practices.
