Morten Dahl (PhD, Aarhus University), machine learning engineer at Datacho, describes how to implement a convolutional neural network that trains and predicts on encrypted data.
TL;DR: we take a classic CNN deep learning model and adapt it to training and prediction on encrypted data.
Analysis of images with convolutional neural networks (CNNs) has become extremely popular in recent years, as CNNs have outperformed many other approaches on image-related tasks.
A recent application of CNN-based image analysis is skin cancer detection, where anyone can use a mobile app to quickly take a photo of a skin lesion and get a quick "expert" assessment (see the video demo: youtu.be/toK1OSLep3s). Access to a large number of clinical images played a key role in training the model, and such datasets can reasonably be considered sensitive.
This leads naturally to questions of privacy and secure multi-party computation (MPC): how many applications are currently held back by a lack of accessible data? In the case above, would the model improve if anyone using the mobile app were allowed to contribute data? And if so, how many would volunteer to expose information about their personal health?
With MPC we can potentially lower the risk of exposure and thereby increase the incentive to participate. More concretely, by instead training on encrypted data we can not only prevent anyone from ever seeing individual data, but also prevent leakage of the learned model parameters. Additional techniques such as differential privacy could furthermore prevent predictions from leaking information, but we will not discuss them here.
This article walks through a simplified image-analysis use case and introduces all the techniques needed along the way. A set of notebooks is available on GitHub (mortendahl/privateml), with the main notebook providing a proof-of-concept implementation.
In addition, I recently gave a talk on this material at the Paris Machine Learning meetup; the slides are published as ParisML17.pdf in the GitHub repository mortendahl/talks.
Many thanks to Andrew Trask, Nigel Smart, Adrià Gascón, and the OpenMined community for their inspiration and discussion on this topic.
Setup
We assume that the training dataset is jointly held by a set of input providers, and that training is performed by two distinct servers (or parties) that we trust not to collude beyond what the protocol specifies. In practice, each server could for instance be a virtual instance controlled by a different organisation in a shared cloud environment.
The input providers only need to transmit their (encrypted) training data once at the beginning; all subsequent computation involves only the two servers, so it is realistic for input providers to use devices such as mobile phones. After training, the model remains encrypted between the two servers, and anyone can use it to make further encrypted predictions.
For technical reasons, we also assume a distinct crypto producer that generates certain raw material used during the computation to improve efficiency; there are ways to eliminate this extra entity, but we do not discuss them here.
Finally, in terms of security we aim for a typical notion used in practice, namely honest-but-curious (or passive) security, where the servers are assumed to follow the protocol but may otherwise try to learn as much as possible from the information they see. While this is a weaker notion than fully malicious (or active) security for the servers, it still gives strong protection against anyone who might compromise one of the servers after the computation, regardless of what they do. Note that this article in fact allows a small amount of privacy leakage during training, as described later.
Image analysis based on CNN
Our use case is the canonical MNIST handwritten digit recognition, namely learning to recognise the digit in a given image, and we will use the CNN model from the Keras examples as our basis.
from keras.models import Sequential
from keras.layers import Conv2D, Activation, MaxPooling2D, Dropout, Flatten, Dense

NUM_CLASSES = 10

feature_layers = [
    Conv2D(32, (3, 3), padding='same', input_shape=(28, 28, 1)),
    Activation('relu'),
    Conv2D(32, (3, 3), padding='same'),
    Activation('relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(.25),
    Flatten()
]

classification_layers = [
    Dense(128),
    Activation('relu'),
    Dropout(.50),
    Dense(NUM_CLASSES),
    Activation('softmax')
]

model = Sequential(feature_layers + classification_layers)

model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])

model.fit(
    x_train, y_train,
    epochs=1,
    batch_size=32,
    verbose=1,
    validation_data=(x_test, y_test))
We will not go into the details of this model here, as the principles are already well covered elsewhere. The basic idea is to first pass an image through a set of feature layers that transform the raw pixels of the input into abstract properties more relevant to our classification task. These properties are then combined by a set of classification layers to produce a probability distribution over the possible digits. The final output is then typically simply the digit with the highest probability.
As we will see shortly, a benefit of using Keras is that we can quickly experiment on unencrypted data to see how the model itself performs, while it also provides a simple interface for us to mirror later in the encrypted setting.
Secure computation with SPDZ
With the CNN in place, let's turn to MPC. We will use the state-of-the-art SPDZ protocol, since it lets us get away with only two servers and lets us improve online performance by moving certain computations to an offline phase.
As is typical of secure computation protocols, all computation takes place in a field, here given by a prime Q. This means we need to encode the floating-point numbers used by the CNN as integers modulo a prime, which puts certain constraints on Q and in turn affects performance.
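To make this concrete, here is a minimal sketch of what such a fixed-point encoding might look like; the constants BASE and PRECISION and this particular Q are illustrative assumptions, not necessarily the values used in the notebooks.

# illustrative fixed-point encoding into a prime field; constants are assumptions
Q = 2657003489534545107915232808830590043  # a sufficiently large prime
BASE = 10
PRECISION = 6

def encode(rational):
    upscaled = int(rational * BASE**PRECISION)
    return upscaled % Q

def decode(field_element):
    # map back to a signed value before downscaling
    upscaled = field_element if field_element <= Q // 2 else field_element - Q
    return upscaled / BASE**PRECISION

assert decode(encode(-0.5)) == -0.5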
Moreover, in interactive protocols such as SPDZ one must consider not only the typical time complexity but also communication and round complexity. Communication complexity measures the number of bytes sent across the network, a comparatively slow process, while round complexity counts the number of synchronisation points between the two servers, where one may sit idle until the other catches up. Both can hence have a large impact on total execution time.
Most importantly, however, the "native" operations of these protocols are only addition and multiplication. Division, comparison, and so on are possible, but more expensive in terms of all three complexities. Below we will look at how to mitigate some of the problems this causes, but first we discuss the basic SPDZ protocol.
Tensor operations
The following code implements the PublicTensor and PrivateTensor classes of our SPDZ flavour, representing tensors for which the two servers respectively know the plaintext values, or only an encryption in the form of a secret sharing.
class PrivateTensor:

    def __init__(self, values, shares0=None, shares1=None):
        if values is not None:
            shares0, shares1 = share(values)
        self.shares0 = shares0
        self.shares1 = shares1

    def reconstruct(self):
        return PublicTensor(reconstruct(self.shares0, self.shares1))

    def add(x, y):
        if type(y) is PublicTensor:
            shares0 = (x.shares0 + y.values) % Q
            shares1 =  x.shares1
            return PrivateTensor(None, shares0, shares1)
        if type(y) is PrivateTensor:
            shares0 = (x.shares0 + y.shares0) % Q
            shares1 = (x.shares1 + y.shares1) % Q
            return PrivateTensor(None, shares0, shares1)

    def mul(x, y):
        if type(y) is PublicTensor:
            shares0 = (x.shares0 * y.values) % Q
            shares1 = (x.shares1 * y.values) % Q
            return PrivateTensor(None, shares0, shares1)
        if type(y) is PrivateTensor:
            a, b, a_mul_b = generate_mul_triple(x.shape, y.shape)
            alpha = (x - a).reconstruct()
            beta  = (y - b).reconstruct()
            return alpha.mul(beta) + \
                   alpha.mul(b) + \
                   a.mul(beta) + \
                   a_mul_b
The code is mostly straightforward; there are of course some technical details, for which see the notebooks accompanying this article.
The basic utility functions used above are:
import numpy as np

def share(secrets):
    shares0 = sample_random_tensor(secrets.shape)
    shares1 = (secrets - shares0) % Q
    return shares0, shares1

def reconstruct(shares0, shares1):
    secrets = (shares0 + shares1) % Q
    return secrets

def generate_mul_triple(x_shape, y_shape):
    a = sample_random_tensor(x_shape)
    b = sample_random_tensor(y_shape)
    c = np.multiply(a, b) % Q
    return PrivateTensor(a), PrivateTensor(b), PrivateTensor(c)
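As a quick sanity check, here is a hypothetical usage example, assuming numpy arrays as inputs and that the notebook's operator overloads for addition and subtraction, along with a .shape property, are in place:

x = PrivateTensor(np.array([[1, 2], [3, 4]]))
y = PrivateTensor(np.array([[5, 6], [7, 8]]))

# neither server sees x or y in the clear; only the product is revealed
z = x.mul(y)
print(z.reconstruct().values)  # [[ 5 12] [21 32]]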
Adapting the model
While it is in principle possible to securely compute any function, and hence also our existing model, in practice it pays to consider variants of the model that are more MPC-friendly, as well as encryption protocols that are more model-friendly. Put slightly more vividly, we often need to open up the two black boxes and adapt the two technologies to each other.
The root of this is that some operations are surprisingly expensive in the encrypted setting. As mentioned above, addition and multiplication are relatively cheap, while comparison and division with a private denominator are not. For this reason we make a few changes to the model to avoid these operations.
The various changes made in this section, as well as their performance implications, can be found in the accompanying Python notebook.
Optimizer
The first concerns the optimiser: although many implementations favour Adam for its efficiency, it involves taking square roots of private values and performing divisions with private denominators. While it is theoretically possible to compute these securely, in practice they could be a significant performance bottleneck, so we avoid Adam.
A simple remedy is to switch to the momentum SGD optimiser, which may imply longer training time but uses only simple operations.
from keras.optimizers import SGD

model.compile(
    loss='categorical_crossentropy',
    optimizer=SGD(clipnorm=10000, clipvalue=10000),
    metrics=['accuracy'])
There is one additional gotcha: many optimisers use clipping to prevent gradients from growing too small or too large. Clipping requires comparisons on private values, again a somewhat expensive operation in the encrypted setting, so we aim to avoid it altogether (in the code above the bounds are simply set high enough to have no effect).
Network layers
Speaking of comparisons, the ReLU and max-pooling layers pose the same problem. CryptoNets replaces the former with a squaring function and the latter with average pooling, while SecureML implements a ReLU-like activation function (at the cost of additional complexity that we wish to avoid here for simplicity). We therefore instead use higher-degree sigmoid activation functions and average-pooling layers. Note that average pooling also uses a division, but this time the denominator is public, so the division becomes simply a multiplication by the reciprocal of a public value.
from keras.layers import AveragePooling2D

feature_layers = [
    Conv2D(32, (3, 3), padding='same', input_shape=(28, 28, 1)),
    Activation('sigmoid'),
    Conv2D(32, (3, 3), padding='same'),
    Activation('sigmoid'),
    AveragePooling2D(pool_size=(2, 2)),
    Dropout(.25),
    Flatten()
]

classification_layers = [
    Dense(128),
    Activation('sigmoid'),
    Dropout(.50),
    Dense(NUM_CLASSES),
    Activation('softmax')
]

model = Sequential(feature_layers + classification_layers)
Simulations show that with this change we need to increase the number of epochs, slowing training down correspondingly; other choices of learning rate or momentum may improve this.
model.fit(
    x_train, y_train,
    epochs=15,
    batch_size=32,
    verbose=1,
    validation_data=(x_test, y_test))
The remaining layers are easily dealt with: dropout and flatten do not care whether we are in an encrypted or unencrypted setting, and the dense and convolution layers are matrix dot products, which require only basic operations.
Softmax and loss function
In the encrypted setting, the final softmax layer also causes complications for training, since we need to compute both an exponentiation with a private exponent and a normalisation in the form of a division with a private denominator.
While both are possible, we choose a simpler approach and allow the likelihood of each predicted class for each training sample to be revealed to one of the servers, which then computes the result from these revealed values. This of course results in a privacy leak, which may or may not pose an acceptable risk.
One heuristic improvement is to permute the vector of likelihoods before revealing anything, thereby hiding which likelihood corresponds to which class. However, this may be of little benefit: for instance, "healthy" often implies a tight distribution while "sick" implies a spread-out one.
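A sketch of the permutation idea follows; shared_seed, likelihoods, and the way the seed is agreed are assumptions for illustration. Since both servers permute their shares identically and locally, the result is a valid sharing of the shuffled vector:

# both servers locally permute their shares using a jointly agreed seed,
# so the revealed vector no longer indicates which likelihood is which class
pi = np.random.RandomState(shared_seed).permutation(NUM_CLASSES)
shuffled = PrivateTensor(None, likelihoods.shares0[pi], likelihoods.shares1[pi])
revealed = shuffled.reconstruct()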
Another is to introduce a dedicated third server that only performs these small computations, sees nothing else of the training data, and hence cannot correlate the labels with the samples. Something is still leaked this way, but exactly what is hard to reason about.
Finally, we could also replace this one-vs-all approach with a one-vs-one approach using, for instance, sigmoids. As mentioned earlier, this would let us compute predictions fully encrypted; for training we would still need to compute the loss, and might also consider using a different loss function.
Note that none of these issues arise when using the trained network to make predictions: there is no loss to compute, and the servers can simply skip the softmax layer and let the recipient of the prediction compute it on the revealed values. For the recipient this is then simply a question of how the values are interpreted.
Transfer learning
At this point it seems we can actually train the model as-is and get decent results. But as is common practice with CNNs, we can obtain significant speedups using transfer learning; indeed, it is somewhat well known that "very few people train their own convolutional network from scratch because they don't have sufficient data" and that "it is always recommended to use transfer learning in practice".
Applied to our setting here, transfer learning splits training into two phases: a pre-training phase on a non-sensitive public dataset, and a fine-tuning phase on sensitive private data. In the skin cancer detection case, for instance, the researchers could choose to pre-train on a public collection of photos and then ask volunteers for additional photos to improve the model.
Besides differing in cardinality, the two datasets may even differ somewhat in their subjects, since CNNs have a tendency to first break images down into meaningful subcomponents, and recognising those subcomponents is what transfers. In other words, the technique is powerful enough that pre-training can happen on a different type of image.
Returning to our character-recognition use case, we let the images of digits 0-4 form the "public" dataset and the images of digits 5-9 the "private" one. Alternatively, it would not seem unreasonable to instead use, say, images of characters a-z as the public set and images of digits 0-9 as the private one.
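A minimal sketch of how such a split could be produced from MNIST (the variable names here are ours, not the notebooks'):

from keras.datasets import mnist

(x_all, y_all), _ = mnist.load_data()

public_idx = y_all <= 4   # digits 0-4 form the "public" dataset
x_public,  y_public  = x_all[public_idx],  y_all[public_idx]
x_private, y_private = x_all[~public_idx], y_all[~public_idx]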
Pre-training on the public dataset
Besides avoiding the overhead of training on encrypted data, pre-training on the public dataset also lets us use more advanced optimisers. Here, for instance, we can switch back to the Adam optimiser for the public images and speed up training, in particular by reducing the number of epochs needed.
(x_train, y_train), (x_test, y_test) = public_dataset

model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])

model.fit(
    x_train, y_train,
    epochs=1,
    batch_size=32,
    verbose=1,
    validation_data=(x_test, y_test))
Once we are satisfied with the pre-training results, the servers can simply secret-share the model's parameters and begin training on the private dataset.
Fine-tuning on the private dataset
When encrypted training begins, the model's parameters are already "on their way", so we can expect to need fewer epochs. As mentioned earlier, transfer learning has another benefit: recognition of subcomponents tends to happen in the lower layers of the network and may in some cases be used as-is. We therefore freeze the parameters of the feature layers and concentrate training on the classification layers.
for layer in feature_layers:
    layer.trainable = False
Note, however, that we still need to run every private training sample forward through these layers; the only difference is that we skip them in the backpropagation step, so there are fewer parameters to train.
Training then proceeds as before, though now with a lower learning rate:
(x_train, y_train), (x_test, y_test) = private_dataset

model.compile(
    loss='categorical_crossentropy',
    optimizer=SGD(clipnorm=10000, clipvalue=10000, lr=0.1, momentum=0.0),
    metrics=['accuracy'])

model.fit(
    x_train, y_train,
    epochs=5,
    batch_size=32,
    verbose=1,
    validation_data=(x_test, y_test))
In the end, this reduces the number of epochs from 25 to 5.
Preprocessing
There are a handful of preprocessing optimisations that could additionally be applied, but we will not pursue them further here.
The first is to move the computation of the frozen layers to the input providers, so that what is shared with the servers are the flattened features rather than the pixels of the images. In that case the feature layers act as feature extraction, and we could potentially afford to use more powerful layers. However, if we want to keep the model proprietary this adds significant complexity, as the parameters would now need to be distributed to the clients in some form.
Another typical way of speeding up training is to first apply a dimensionality-reduction technique such as principal component analysis, as done in the encrypted setting by BSS+'17.
Adapting the protocol
Having looked at the model, we next turn to the protocol: as we shall see, understanding the operations we need to perform can help speed things up here as well.
In particular, much of the computation can be moved to the crypto producer, whose generated raw material is independent of the private inputs and, to some extent, even of the model. Its computation can hence be done in bulk ahead of time, whenever convenient.
Recall from earlier that we need to optimise round and communication complexity simultaneously; the extensions proposed here typically improve both, but at the price of additional local computation. Practical experiments are hence needed to validate their benefit under concrete conditions.
Dropout
Starting with the simplest type of layer, we note that nothing specific to secure computation happens here; the only thing to ensure is that the two servers agree on which values to drop in each training iteration, which can be done by simply agreeing on a seed value.
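A sketch of how this could look (the function name and seed-agreement details are assumptions):

# both servers run this with the same agreed-upon seed each iteration,
# yielding identical masks without communicating anything about the values
def dropout_mask(shape, rate, seed):
    rng = np.random.RandomState(seed)
    return (rng.uniform(size=shape) > rate).astype(int)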
Average pooling
The forward pass of average pooling requires only a summation followed by a division with a public denominator, and can hence be implemented by a multiplication with a public value: since the denominator is public we can easily find its reciprocal and then simply multiply and truncate. Likewise, the backward pass is simply a scaling, so the passes in both directions are entirely local operations.
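As a sketch, reusing the illustrative fixed-point encoding from earlier (truncation after the multiplication works as for any fixed-point product and is omitted here):

def divide_by_public(x, d):
    # x is a PrivateTensor, d a public integer such as the pool size;
    # each server multiplies its share by the encoded reciprocal, locally
    reciprocal = encode(1. / d)
    shares0 = (x.shares0 * reciprocal) % Q
    shares1 = (x.shares1 * reciprocal) % Q
    return PrivateTensor(None, shares0, shares1)  # then truncate as usual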
Dense layer
Both the forward and backward passes of dense layers require a dot product, which can of course be computed using the classical multiplications and additions. If we want dot(x, y) for matrices x and y of shapes (m, k) and (k, n), this requires m * n * k multiplications, meaning we also have to communicate the same number of masked values. While these can all be sent in parallel so that only one round is needed, if we instead allow ourselves another kind of preprocessed triple we can cut the communication cost by an order of magnitude.
For instance, the second dense layer in our model computes a dot product between a (32, 128) and a (128, 5) matrix. With the typical approach this requires sending 32 * 5 * 128 == 20480 masked values per batch, but with the preprocessed triples described below we only have to send 32 * 128 + 5 * 128 == 4736 masked values, almost a 5x improvement. For the first dense layer the gain is even larger, roughly a 25x improvement.
The trick is to make sure each private value in the matrices is only masked and sent once. To achieve this we need triples (a, b, c), where a and b are random matrices of the appropriate shapes and c satisfies c == dot(a, b).
def generate_dot_triple(x_shape, y_shape):
    a = sample_random_tensor(x_shape)
    b = sample_random_tensor(y_shape)
    c = np.dot(a, b) % Q
    return PrivateTensor(a), PrivateTensor(b), PrivateTensor(c)
Given such a triple we can instead communicate the masked values alpha = x - a and beta = y - b, and then obtain dot(x, y) by a local computation:
class PrivateTensor:

    ...

    def dot(x, y):
        if type(y) is PublicTensor:
            shares0 = x.shares0.dot(y.values) % Q
            shares1 = x.shares1.dot(y.values) % Q
            return PrivateTensor(None, shares0, shares1)
        if type(y) is PrivateTensor:
            a, b, a_dot_b = generate_dot_triple(x.shape, y.shape)
            alpha = (x - a).reconstruct()
            beta  = (y - b).reconstruct()
            return alpha.dot(beta) + \
                   alpha.dot(b) + \
                   a.dot(beta) + \
                   a_dot_b
The security of using these triples follows from the same argument as for multiplication triples: the masked values communicated perfectly hide x and y, and c, being an independent fresh sharing, ensures that the result cannot leak anything about its constituents.
Note that SecureML uses triples of this kind, and also gives techniques allowing the servers to generate them without the help of a crypto producer.
Convolutions
Like dense layers, convolutions can be treated either as a series of scalar multiplications or as a matrix multiplication, although the latter first requires expanding the tensor of training samples into a matrix with significant redundancy. Unsurprisingly, both approaches lead to increased communication cost, which can be avoided by introducing yet another kind of triple.
As an example, the first convolution layer maps a tensor of shape (m, 28, 28, 1) to one of shape (m, 28, 28, 32) using 32 kernels of shape (3, 3, 1) (ignoring the bias vector). For batch size m == 32 this means 7,225,344 communicated elements if we use only scalar multiplications, and 226,080 if we use a matrix multiplication. However, since only (32*28*28) + (32*3*3) == 25,376 private values are involved in total (the bias vector is not counted since it only requires additions), we see an overhead of roughly 9x. In other words, each private value is masked and sent several times. With a new kind of triple we can remove this overhead and save on communication cost: for 64-bit elements this means 200KB per batch instead of the corresponding 1.7MB and 55MB.
The triples (a, b, c) we need here are like those used for dot products, with a and b matching the shapes of the two inputs, i.e. (m, 28, 28, 1) and (32, 3, 3, 1), and c matching the output shape (m, 28, 28, 32).
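Generating them could mirror generate_dot_triple above (a sketch; conv2d stands in for whatever convolution routine the crypto producer uses and is an assumption, not a function from the notebooks):

def generate_conv_triple(x_shape, w_shape):
    a = sample_random_tensor(x_shape)   # e.g. (m, 28, 28, 1)
    b = sample_random_tensor(w_shape)   # e.g. (32, 3, 3, 1)
    c = conv2d(a, b) % Q                # matches the output, e.g. (m, 28, 28, 32)
    return PrivateTensor(a), PrivateTensor(b), PrivateTensor(c)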
Sigmoid activation
As done previously, we can use a degree-9 polynomial to approximate the sigmoid activation function with sufficient accuracy. Evaluating this polynomial on a private value x requires computing a series of powers of x, which can of course be done by sequential multiplications - but this means several rounds and a corresponding amount of communication.
Instead, we can again use a new kind of preprocessed triple that lets us compute all the required powers in a single round. The length of these "triples" is not fixed but equals the highest power: the triple for squaring contains independent sharings of a and a**2, while the triple for cubing contains independent sharings of a, a**2, and a**3.
Once we have these powers of x, evaluating a polynomial with public coefficients is then just a local weighted sum. The security of this procedure again follows from the fact that all the powers in the triple are independently shared.
def pol_public(x, coeffs, triple):
    powers = pows(x, triple)
    return sum( xe * ce for xe, ce in zip(powers, coeffs) )
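The public coefficients themselves can be derived offline, for instance by fitting the polynomial to the sigmoid on an interval around zero. The fit below is illustrative, not the exact coefficients from the notebooks; note that polyfit returns the constant term first, so depending on whether pows includes the zeroth power it may need to be added separately as a public value:

xs = np.linspace(-10, 10, 1000)
ys = 1 / (1 + np.exp(-xs))

# degree-9 least-squares fit; coeffs[0] is the constant term,
# coeffs[1:] pair with the powers x, x**2, ...
coeffs = np.polynomial.polynomial.polyfit(xs, ys, 9)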
As before, there is a caveat around fixed-point precision: the higher powers need more space, since x**n has n times the precision of x, and we must ensure it does not wrap around modulo Q, or we can no longer decode it correctly. This can be solved by temporarily switching to a sufficiently larger field P while computing the powers, at the cost of an extra two rounds of communication.
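Reusing the illustrative encoding from earlier, the precision growth is easy to see (a sketch; decoding a power requires knowing its accumulated precision):

def decode_at(field_element, precision):
    upscaled = field_element if field_element <= Q // 2 else field_element - Q
    return upscaled / BASE**precision

x = encode(1.5)                     # upscaled by BASE**PRECISION
x3 = pow(x, 3, Q)                   # now upscaled by BASE**(3 * PRECISION)
assert decode_at(x3, 3 * PRECISION) == 1.5**3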
Practical experiments will show whether it is better to stay in Q and use more rounds of multiplications, or to pay for the switch and the arithmetic on larger numbers. In particular, the former seems better for low-degree polynomials.
Proof of concept
A proof-of-concept implementation, without networking, is available for experimentation and reproducibility. It is still a work in progress: the current code supports training a new classifier on encrypted features, but not yet extracting features from encrypted images. In other words, it assumes the input providers themselves run their images through the feature-extraction layers and send the results to the servers in encrypted form; the weights for that part of the model are hence not currently private. A future version will handle this, letting the feature layers run on encrypted data so that training and prediction happen directly from images.
from pond.nn import Sequential, Dense, Sigmoid, Dropout, Reveal, Softmax, CrossEntropy
from pond.tensor import PrivateEncodedTensor

classifier = Sequential([
    Dense(128, 6272),
    Sigmoid(),
    Dropout(.5),
    Dense(5, 128),
    Reveal(),
    Softmax()
])

classifier.initialize()

classifier.fit(
    PrivateEncodedTensor(x_train_features),
    PrivateEncodedTensor(y_train),
    loss=CrossEntropy(),
    epochs=3
)
The code is split into several Python notebooks and comes with a set of precomputed weights that let you skip some of the steps:
The first notebook handles pre-training on the public data using Keras, producing the model used for feature extraction. This step can be skipped by using the precomputed weights in the repository instead.
The second notebook applies the above model to feature extraction on the private data, generating the features used for training the new encrypted classifier. A future version will encrypt the data first. This step cannot be skipped, since the extracted features are too large to precompute and ship.
The third notebook takes the extracted features and trains a new encrypted classifier. This is by far the most expensive step, and can be skipped by using the precomputed weights in the repository.
Finally, the fourth notebook uses the new classifier to perform encrypted predictions on new images. Again, feature extraction is currently unencrypted.
To run the code, first clone the repository:
$ git clone https://github.com/mortendahl/privateml.git && \
  cd privateml/image-analysis/
then install the dependencies:
$ pip3 install jupyter numpy tensorflow keras h5py
and finally run the notebooks:
$ jupyter notebook
Thoughts
As always, by the time earlier ideas and questions have been answered, a new batch has already arrived.
Generalised triples
In the effort to reduce communication, one may wonder how much more work can be moved to the preprocessing phase by using additional kinds of triples.
As mentioned several times (and suggested in papers such as BCG+'17), we typically seek to ensure that each private value is only masked and sent once. So if we are, for instance, computing both dot(x, y) and dot(x, z), then it makes sense to have a triple (r, s, t, u, v), where r is used to mask x, s to mask y, and u to mask z, while t and v are sharings of the products needed to recombine the two results. This pattern occurs during training, for example, where some values computed during the forward pass can be cached and reused in the backward pass; a sketch follows below.
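A sketch of such a generalised triple, mirroring generate_dot_triple above (the name and shape bookkeeping are ours, not from the notebooks):

def generate_double_dot_triple(x_shape, y_shape, z_shape):
    r = sample_random_tensor(x_shape)
    s = sample_random_tensor(y_shape)
    u = sample_random_tensor(z_shape)
    t = np.dot(r, s) % Q   # used when recombining dot(x, y)
    v = np.dot(r, u) % Q   # used when recombining dot(x, z)
    return [PrivateTensor(w) for w in (r, s, t, u, v)]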
Perhaps more importantly though, when making predictions with a model, i.e. with fixed private weights, we would like to mask the weights only once and then reuse those masks for every prediction. Doing so means the masking and communication become proportional only to the input vector flowing through the model, as opposed to the input vector and the weights, as in papers such as JVC'18. More generally, we would ideally want communication to be proportional only to the values that change, which can be achieved by special triples in an amortised sense.
Finally, triples could in principle cover even more functionality, such as evaluating a dense layer and its activation function in a single round of communication, but the biggest obstacle appears to be scalability: both the storage the triples require and the amount of computation needed in the recombination step, especially when dealing with tensors.
Activation function
A natural question is which other typical activation functions are more efficient in the encrypted setting. As mentioned, SecureML keeps ReLU by temporarily switching to garbled circuits, and CryptoDL gives low-degree polynomial approximations of sigmoid, ReLU, and tanh (using improved Chebyshev polynomials).
It may also be worth considering simpler, atypical activation functions such as squaring, as used in CryptoNets, if minimising computation and communication is paramount.
Garbled circuits
Besides being a way to securely compute more advanced activation functions, as mentioned above, garbled circuits could in fact also be used for larger parts of the computation, including as the primary means of secure computation, as done by e.g. DeepSecure.
Compared to protocols such as SPDZ, garbled circuits have the advantage of using only a constant number of communication rounds. The downside is that operations happen on bits rather than on comparatively large field elements, which implies more computation.
Precision
A large body of work around federated learning involves gradient compression as a way of reducing communication cost. Closer to our setting is BMMP'17, which uses quantisation to apply homomorphic encryption to deep learning, and even unencrypted production-ready systems often consider this technique as a way of improving learning performance.
Floating point arithmetic
Above we used fixed-point numbers to encode real numbers as field elements, whereas unencrypted deep learning typically uses floating-point encodings. As shown by ABZS'12 and the reference implementation of SPDZ, it is also possible to use floating-point encodings in the encrypted setting, with apparent advantages for some operations.
GPU
For performance reasons, deep learning is typically done on GPUs today, so it is natural to ask whether similar accelerations can be applied to MPC computations. Some such work exists for garbled circuits, but it appears less explored in secret-sharing settings like SPDZ.
The biggest problem here may be the maturity and availability of arbitrary-precision arithmetic on GPUs (though some research exists in this area), which is needed for computing on field elements larger than those natively supported, such as 64+ bits. Two things are worth remembering, however: first, while we compute on elements larger than what is natively supported, they are still bounded by the modulus; and second, we could choose to do our secure computation over a ring instead of a field.