The supply chain is a system comprising multiple entities such as suppliers, manufacturers, carriers, and customers. Sharing data safely among these entities is vital to a supply chain's success. This paper proposes a secure supply chain data sharing scheme based on blockchain and IPFS, comprising a decentralized architecture, a capability-based access control mechanism for data management, and a message acquisition mechanism based on the publish-subscribe model. Furthermore, a BIZi storage method is proposed to enhance the reliability of the scheme and streamline data backup by combining smart contracts, the Bitswap protocol of IPFS, and the Zigzag code. Finally, a prototype supply chain platform is implemented on Ethereum. Performance tests show that BIZi saves more storage resources at the same fault-tolerance rate.
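The capability-based access control idea can be illustrated with a minimal sketch. This is not the paper's implementation: the key, subject, and resource names are hypothetical, and in the paper's setting verification would be performed on-chain rather than by a symmetric-key gatekeeper. The core idea is that a capability token grants its holder specific actions on a resource and can be verified without consulting a central access-control list.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key for the sketch; a real deployment would not
# rely on a single shared symmetric secret.
SECRET = b"demo-issuer-key"

def issue_capability(subject: str, resource: str, actions: list) -> tuple:
    """Issue an unforgeable token granting `subject` the listed actions on `resource`."""
    payload = json.dumps(
        {"sub": subject, "res": resource, "act": sorted(actions)},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload, tag

def check_capability(payload: bytes, tag: str, resource: str, action: str) -> bool:
    """Allow a request only if the token is authentic and covers the action."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    cap = json.loads(payload)
    return cap["res"] == resource and action in cap["act"]
```

For example, a token issued to a supplier for read-only access to shipment records would pass `check_capability` for a read but fail for a write or for a forged tag.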
As an important step in the IoT workflow, IoT data aggregation transmits, filters, analyzes, and synthesizes data collected by IoT devices such as sensors to support decision-making and evaluation. However, the traditional IoT data aggregation model relies on a central aggregator, such as a third-party cloud server, and processes data with conventional encryption, which can lead to leakage of raw data and calculation errors, making it difficult to guarantee that participants can take part in aggregation safely. Owing to its homomorphic properties and the performance improvements of recent years, homomorphic encryption makes it possible to protect data privacy during both transmission and computation. In addition, as a tamper-proof ledger, the blockchain can be used for evidence storage, specifically to record the promises produced by a commitment mechanism and to complete the verification of data consistency and correctness.
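The additive homomorphism that makes private aggregation possible can be sketched with a toy Paillier cryptosystem: the product of two ciphertexts decrypts to the sum of the plaintexts, so an aggregator can total encrypted readings without ever seeing them. This is an illustrative sketch only, not the paper's scheme; the primes below are far too small for any real use.

```python
import random
from math import gcd

def _lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

def keygen(p: int = 293, q: int = 433):
    """Toy Paillier key pair with g = n + 1 (toy-sized primes, insecure)."""
    n = p * q
    lam = _lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # modular inverse (Python 3.8+)
    return n, (lam, mu, n)

def encrypt(n: int, m: int) -> int:
    """Enc(m) = (n+1)^m * r^n mod n^2 for random r coprime to n."""
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c: int) -> int:
    """Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) / n."""
    lam, mu, n = sk
    l = (pow(c, lam, n * n) - 1) // n
    return (l * mu) % n
```

The aggregation step is then `decrypt(sk, (c1 * c2) % (n * n))`, which equals the sum of the two underlying plaintexts; a commitment to each ciphertext could be stored on-chain so that participants can later verify consistency.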
KEYWORDS: Data modeling, Education and training, Machine learning, Performance modeling, Data privacy, Defense and security, Process modeling, Mathematical optimization, Deep learning, Telecommunications
Federated learning is a new privacy-preserving framework for machine learning. The central server aggregates parameters optimized locally by multiple decentralized participants, distributes the resulting model to the clients, and finally converges to a global model. A model whose performance approaches that of centralized training is thus obtained without the data ever leaving the participants' devices. However, many studies have shown that this centralized federated system is vulnerable to confidentiality attacks by "honest-but-curious" adversaries, who exploit the gradient information transmitted during federated training to mount reconstruction or inference attacks, recovering participants' private data or deducing membership information, which poses a severe challenge to the privacy protection of federated learning. This paper proposes a hybrid defense strategy that combines a label-obfuscating autoencoder with localized differential privacy. On the one hand, the labels of the participants' local data are obfuscated through an autoencoder network, cutting the link between the gradient information and the original data. On the other hand, a localized differential privacy mechanism perturbs the transmitted gradient parameters, and a model performance loss constraint mechanism is designed to reduce the impact of the added noise on model performance. Experiments show that the proposed hybrid defense strategy effectively resists reconstruction and inference attacks during federated model training and achieves a better balance among computational overhead, model performance, and privacy security.
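The localized differential privacy side of such a defense is commonly realized by clipping each client's gradient and adding calibrated noise before upload. The sketch below shows that pattern in its simplest form; the function name and parameters are illustrative, not the paper's mechanism, and it omits the performance-loss constraint the paper adds on top.

```python
import math
import random

def perturb_gradient(grad, clip_norm=1.0, sigma=0.5, seed=None):
    """Clip a gradient vector to L2 norm `clip_norm`, then add Gaussian noise.

    Clipping bounds each client's influence; the noise scale sigma * clip_norm
    then determines the privacy/utility trade-off.
    """
    rng = random.Random(seed)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale + rng.gauss(0.0, sigma * clip_norm) for g in grad]
```

Only the perturbed vector leaves the client, so the server (or an eavesdropper) never observes the raw gradient from which training data could be reconstructed.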
End-to-end driving behavior decision-making is a research hotspot in the field of autonomous driving. This paper studies end-to-end lane-change decision control based on the PPO (Proximal Policy Optimization) deep reinforcement learning algorithm. First, an end-to-end decision control model based on the PPO algorithm is established. The model takes the information perceived from the environment as the input state and outputs the control quantities (acceleration, braking, steering). Training and validation in the driving environment of the TORCS (The Open Racing Car Simulator) platform show that the model can achieve end-to-end lane-change decision-making. Finally, compared with the DDPG (Deep Deterministic Policy Gradient) model, another deep reinforcement learning method, the experimental results show that the PPO lane-change decision control model converges faster.
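The ingredient that distinguishes PPO from DDPG is its clipped surrogate objective, which can be written in a few lines. This is the standard PPO loss for a single sample, not the paper's full lane-change model:

```python
def ppo_clip_loss(ratio: float, advantage: float, eps: float = 0.2) -> float:
    """Clipped PPO surrogate objective for one sample, negated as a loss.

    ratio = pi_theta(a|s) / pi_theta_old(a|s); clipping the ratio to
    [1 - eps, 1 + eps] keeps each policy update close to the old policy,
    which is what gives PPO its stable, fast convergence.
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return -min(ratio * advantage, clipped * advantage)
```

In training, this loss is averaged over a batch of (state, action, advantage) samples and minimized with gradient descent over the policy parameters.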