Applying homomorphic encryption for security and privacy

Federated learning supports homomorphic encryption as an added measure of security for federated training data. Homomorphic encryption is a form of public key cryptography that enables computations on the encrypted data without first decrypting it, meaning the data can be used in modeling without exposing it to the risk of discovery.

With homomorphic encryption, the results of the computations remain in encrypted form and when decrypted, result in an output that is the same as the output produced with computations performed on unencrypted data. It uses a public key for encryption and a private key for decryption.

How it works with Federated Learning

Homomorphic encryption is an optional encryption method to add additional security and privacy to a Federated Learning experiment. When homomorphic encryption is applied in a Federated Learning experiment, the parties send their homomorphically encrypted model updates to the aggregator. The aggregator does not have the private key and can only see the homomorphically encrypted model updates. For example, the aggregator cannot reverse engineer the model updates to discover information on the parties' training data. The aggregator fuses the model updates in their encrypted form which results in an encrypted aggregated model. Then the aggregator sends the encrypted aggregated model to the participating parties who can use their private key for decryption and continue with the next round of training. Only the participating parties can decrypt model data.
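The round described above can be sketched with a toy additively homomorphic scheme. This is an illustrative sketch only: it uses textbook Paillier with tiny demo parameters rather than the Fully Homomorphic Encryption the product uses, and the single-integer "model updates" stand in for real weight tensors.

```python
# Sketch of one encrypted aggregation round using textbook Paillier
# (additively homomorphic). Illustrative only: the product uses FHE,
# and these tiny primes are nowhere near secure.
import math, random

# --- key generation (in the product, done automatically per experiment) ---
p, q = 499, 547                       # demo primes; never use in practice
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                  # valid shortcut because g = n + 1

def encrypt(m):
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:       # blinding factor must be a unit mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# --- parties encrypt their model updates (here, single integers) ---
updates = [10, 20, 33]
ciphertexts = [encrypt(u) for u in updates]

# --- aggregator fuses updates without the private key: multiplying
#     Paillier ciphertexts adds the underlying plaintexts ---
fused = 1
for c in ciphertexts:
    fused = (fused * c) % n2

# --- parties decrypt the aggregated model with the private key ---
assert decrypt(fused) == sum(updates)   # same result as aggregating in the clear
```

The key point mirrors the text: the aggregator operates only on ciphertexts and never holds the private key, yet the decrypted result equals the aggregation computed on plaintext.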

Supported frameworks and fusion methods

Fully Homomorphic Encryption (FHE) supports the simple average fusion method for these model frameworks:

  • TensorFlow
  • PyTorch
  • Scikit-learn classification
  • Scikit-learn regression

Before you begin

To get started with using homomorphic encryption, ensure that your experiment meets the following requirements:

  • The hardware spec must be at least small. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption caused by more powerful data encryption. See the encryption level table in Configuring the aggregator.

  • The software spec is fl-rt22.2-py3.10.

  • FHE is supported in Python client version 1.0.263 or later. All parties must use the same Python client version.

Requirements for the parties

Each party must:

  • Run on a Linux x86 system.
  • Be configured with a root certificate that identifies a certificate authority shared by all parties.
  • Be configured with an RSA public and private key pair with the attributes described in the following table.
  • Be configured with a certificate of the party issued by the certificate authority. The RSA public key must be included in the party's certificate.
Note: You can also choose to use self-signed certificates.

Homomorphic public and private encryption keys are generated and distributed automatically and securely among the parties for each experiment. Only the parties participating in an experiment have access to the private key generated for the experiment. To support the automatic generation and distribution mechanism, the parties must be configured with the certificates and RSA keys specified previously.

RSA key requirements

Table 1. RSA key requirements

  Attribute          Requirement
  Key size           4096 bits
  Public exponent    65537
  Password           None
  Hash algorithm     SHA256
  File format        The key and certificate files must be in PEM format
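Since self-signed certificates are an option, one way to produce a key and certificate that satisfy these attributes is with OpenSSL. The file names and subject below are placeholders for this sketch; `-newkey rsa:4096` creates a 4096-bit key with the default public exponent 65537, `-nodes` leaves the key unencrypted (no password), and both outputs are PEM by default.

```shell
# Generate a 4096-bit RSA key (public exponent 65537 by default) and a
# self-signed certificate, both in PEM format, with no key password.
# File names and the subject are placeholders for this sketch.
openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
    -keyout party_key.pem -out party_cert.pem \
    -days 365 -subj "/CN=party1"

# Confirm the key size matches the table above
openssl rsa -in party_key.pem -noout -text | head -n 1
```

Because `-x509` issues a self-signed certificate, the same file can serve as the party's certificate and, for testing, as the shared root certificate; in production each party's certificate would instead be issued by the common certificate authority.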

Configuring the aggregator (admin)

As you create a Federated Learning experiment, follow these steps:

  1. In the Configure tab, toggle "Enable homomorphic encryption".
  2. Choose small or above for Hardware specification. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption for homomorphic encryption.
  3. Ensure that you upload an unencrypted initial model when selecting the model file for Model specification.
  4. Select "Simple average (encrypted)" for Fusion method. Click Next.
  5. Check Show advanced in the Define hyperparameters tab.
  6. Select the level of encryption in Encryption level.
    Higher encryption levels increase security and precision but consume more resources (computation, memory, and network bandwidth). The default is encryption level 1.
    See the following table for a description of the encryption levels:
Table 2. Encryption levels: security and precision

  Level   Security    Precision
  1       High        Good
  2       High        High
  3       Very high   Good
  4       Very high   High

Security is the strength of the encryption, typically measured by the number of operations that an attacker must perform to break the encryption.
Precision is the accuracy of the encryption system's outcomes. Higher precision levels reduce the loss of model accuracy caused by the encryption.

Connecting to the aggregator (party)

The following steps show only the configuration needed for homomorphic encryption. For a step-by-step tutorial on using homomorphic encryption in Federated Learning, see the FHE sample.

To see how to create a general end-to-end party connector script, see Connect to the aggregator (party).

  1. Install the Python client with FHE with the following command:
    pip install 'ibm_watson_machine_learning[fl-rt23.1-py3.10,fl-crypto]'

  2. Configure the party as follows:

    party_config = {
        "local_training": {
            "info": {
                "crypto": {
                    "key_manager": {
                        "key_mgr_info": {
                            "distribution": {
                                "ca_cert_file_path": "path of the root certificate file identifying the certificate authority",
                                "my_cert_file_path": "path of the certificate file of the party issued by the certificate authority",
                                "asym_key_file_path": "path of the RSA key file of the party"
                            }
                        }
                    }
                }
            }
        }
    }

  3. Run the party connector script after configuration.

Parent topic: Federated Learning
