CLOUD COMPUTING

S.No | Project Code | Project Title | Abstract |
---|---|---|---|
1 | VTJCC01 | Privacy-Preserving and Trusted Keyword Search for Multi-Tenancy Cloud | |
2 | VTJCC02 | Attribute-Based Searchable Encryption with Forward Security for Cloud-Assisted IoT | |
3 | VTJCC03 | PoEDDP-A Fast RSA-Based Proof of Possession Accumulator of Dynamic Data on the Cloud | |
4 | VTJCC04 | EDASVIC: Enabling Efficient and Dynamic Storage Verification for Clouds of Industrial Internet Platforms | |
5 | VTJCC05 | Group Key Management and Sharing Protocol in Cloud Computing Environment | |
6 | VTJCC06 | Achieving Secure, Verifiable, and Efficient Boolean Keyword Searchable Encryption for Cloud Data Warehouse | |
7 | VTJCC07 | Optimal Resource Allocation Using Genetic Algorithm in Container-Based Heterogeneous Cloud | |
8 | VTJCC08 | Attribute-Based Management of Secure Kubernetes Cloud Bursting | |
9 | VTJCC09 | PRBFPT: A Practical Redactable Blockchain Framework With a Public Trapdoor | |
10 | VTJCC10 | I/O Causality Based In-Line Data Deduplication for Non-Volatile Memory Enabled Storage Systems | |
11 | VTJCC11 | Scalable Data Partitioning Techniques for Distributed Data Processing in Cloud Environments | |
12 | VTJCC12 | Dynamic AES Encryption and Blockchain Key Management: A Novel Solution for Cloud Data Security | |
13 | VTJCC13 | Heterogeneous Reconfigurable Accelerator for Homomorphic Evaluation on Encrypted Data | |
14 | VTJCC14 | Optimized Encryption-Integrated Strategy for Containers Scheduling and Secure Migration in Multi-Cloud Data Centers | |
15 | VTJCC15 | DEDUCT: A Secure Deduplication of Textual Data in Cloud Environments | |
16 | VTJCC16 | Cloud-assisted Privacy-Preserving Spectral Clustering Algorithm within a Multi-User Setting | |
17 | VTJCC17 | Efficient Privacy-Friendly and Flexible Wearable Data Processing With User-Centric Access Control | |

S.No | Project Code | Project Title | Abstract |
---|---|---|---|
1 | VTJDM01 | Dynamic Searchable Symmetric Encryption With Strong Security and Robustness | |
2 | VTJDM02 | A Novel Proxy Re-Encryption Technique for Secure Data Sharing in Cloud Environment | |
3 | VTJDM03 | Comment on "Expressive Public-Key Encryption with Keyword Search: Generic Construction from KP-ABE and an Efficient Scheme Over Prime-Order Groups" | |
4 | VTJDM04 | An Implementation for Secure Data Deduplication on End-to-End Encrypted Documents | |
5 | VTJDM05 | Prediction of Chronic Kidney Disease - A Machine Learning Perspective | |
6 | VTJDM06 | Enhancing the prediction of employee turnover with knowledge graphs and explainable AI | |
7 | VTJDM07 | A Reinforcement Learning Based Recommendation System to Improve Performance of Students in Outcome Based Education Model | |
8 | VTJDM08 | Novel Transformer Based Contextualized Embedding and Probabilistic Features for Depression Detection From Social Media | |
9 | VTJDM09 | A Secure Medical Data Sharing Framework for Fight Against Pandemics Like Covid-19 by Using Public Blockchain | |
10 | VTJDM10 | Secure Sharing Architecture of Personal Healthcare Data Using Private Permissioned Blockchain for Telemedicine | |
11 | VTJDM11 | Investigating Gender and Age Variability in Diabetes Prediction: A Multi-Model Ensemble Learning Approach | |
12 | VTJDM12 | Blockchain-Enabled Framework for Transparent Land Lease and Mortgage Management | |
13 | VTJDM13 | Enhancing Gun detections with transfer learning and YAMNet Audio classification | |
14 | VTJDM14 | A Novel Early Detection and Prevention of Coronary Heart Disease Framework Using Hybrid Deep Learning Model and Neural Fuzzy Inference System | |
15 | VTJDM15 | Scalable and Popularity-Based Secure Deduplication Schemes With Fully Random Tags | |
16 | VTJDM16 | IPO-PEKS: Effective Inner Product Outsourcing Public Key Searchable Encryption From Lattice in the IoT | |
17 | VTJDM17 | Enhancing IoT Security and Efficiency: A Blockchain Assisted Multi-Keyword Searchable Encryption Scheme | |
18 | VTJDM18 | Medical Image Encryption through Chaotic Asymmetric Cryptosystem | |

S.No | Project Code | Project Title | Abstract |
---|---|---|---|
1 | VTJNW01 | Intelligent SLA Selection Through the Validation Cloud Broker System | |
2 | VTJNW02 | Parallel Enhanced Whale Optimization Algorithm for Independent Tasks Scheduling on Cloud Computing | |
3 | VTJNW03 | Design Aspects of Decentralized Identifiers and Self-Sovereign Identity Systems | |
4 | VTJNW04 | Secure and Fine-Grained Access Control With Optimized Revocation for Outsourced IoT EHRs With Adaptive Load-Sharing in Fog-Assisted Cloud Environment | |
5 | VTJNW05 | Incentive-Vacation Queueing for Edge Crowd Computing | |
6 | VTJNW06 | Concise and Efficient Multi-Identity Fully Homomorphic Encryption Scheme | |
7 | VTJNW07 | Digitalized and Decentralized Open-Cry Auctioning: Key Properties, Solution Design, and Implementation | |
8 | VTJNW08 | ACO-Based Scheme in Edge Learning NOMA Networks for Task-Oriented Communications | |
9 | VTJNW09 | Non-Fungible Token Enhanced Blockchain based Online Social Network | |
10 | VTJNW10 | REHREC: Review Effected Heterogeneous Information Network Recommendation System | |
11 | VTJNW11 | Federated Learning for Decentralized DDoS Attack Detection in IoT Networks | |
12 | VTJNW12 | Innovative Energy-Efficient Proxy Re-Encryption for Secure Data Exchange in Wireless Sensor Networks | |

S.No | Project Code | Project Title | Abstract |
---|---|---|---|
1 | VTJNS01 | Leakage-Resilient Anonymous Heterogeneous Multi-Receiver Hybrid Encryption in Heterogeneous Public-Key System Settings | |
2 | VTJNS02 | Formal Verification of Data Modifications in Cloud Block Storage Based on Separation Logic | |
3 | VTJNS03 | An Efficient IoT-Fog-Cloud Resource Allocation Framework Based on Two-Stage Approach | |
4 | VTJNS04 | Multi-Smart Meter Data Encryption Scheme Based on Distributed Differential Privacy | |
5 | VTJNS05 | Multi-grained Trace Collection, Analysis, and Management of Diverse Container Images | |
6 | VTJNS06 | AES Security Improvement by Utilizing New Key-Dependent XOR Tables | |
7 | VTJNS07 | Ethereum Blockchain Framework Enabling Banks to Know Their Customers | |
8 | VTJNS08 | An Efficient Task Scheduling for Cloud Computing Platforms Using Energy Management Algorithm: A Comparative Analysis of Workflow Execution Time | |
9 | VTJNS09 | AASSI: A Self-Sovereign Identity Protocol With Anonymity and Accountability | |
10 | VTJNS10 | A Comparative Analysis of Metaheuristic Techniques for High Availability Systems | |
11 | VTJNS11 | MS-FL: A Federated Learning Framework Based on Multiple Security Strategies | |

S.No | Project Code | Project Title | Abstract |
---|---|---|---|
1 | VTJBC01 | PASSP: A Private Authorization Scheme Oriented Service Providers | |
2 | VTJBC02 | PEEV: Parse Encrypt Execute Verify—A Verifiable FHE Framework | |
3 | VTJBC03 | An Inner Product Predicate-Based Medical Data-Sharing and Privacy Protection System | |
4 | VTJBC04 | Optimizing Industrial IoT Data Security Through Blockchain-Enabled Incentive-Driven Game Theoretic Approach for Data Sharing | |
5 | VTJBC05 | Blockchain-Assisted Hierarchical Attribute-Based Encryption Scheme for Secure Information Sharing in Industrial Internet of Things | |
6 | VTJBC06 | Secure Reviewing and Data Sharing in Scientific Collaboration: Leveraging Blockchain and Zero Trust Architecture | |
7 | VTJBC07 | Blockchain Based KYC Model for Credit Allocation in Banking | |
8 | VTJBC08 | Public Edu Chain: A Framework for Sharing Student-Owned Educational Data on Public Blockchain Network | |
9 | VTJBC09 | Blockchain-Based Logging to Defeat Malicious Insiders: The Case of Remote Health Monitoring Systems | |
10 | VTJBC10 | Blockchain Based an Efficient and Secure Privacy Preserved Framework for Smart Cities | |
11 | VTJBC11 | Research on the privacy protection technology of blockchain spatiotemporal big data | |
12 | VTJBC12 | A New Identity Authentication and Key Agreement Protocol Based on Multi-Layer Blockchain in Edge Computing | |
13 | VTJBC13 | Stub Signature-Based Efficient Public Data Auditing System Using Dynamic Procedures in Cloud Computing | |
14 | VTJBC14 | Effective Identity Authentication Based on Multi-attribute Centers for Secure Government Data Sharing | |
Cloud service models intrinsically cater to multiple tenants. In the current multi-tenancy model, cloud service providers isolate data within a single tenant boundary with little or no cross-tenant interaction. With the boom in cloud applications, allowing a user to search across tenants is crucial to utilizing stored data more effectively. However, conducting such a search operation is inherently risky, primarily due to privacy concerns. Moreover, existing schemes typically focus on a single tenant and are not well suited to a multi-tenancy cloud, where each tenant operates independently. To address this issue, this article provides a privacy-preserving, verifiable, accountable, and parallelizable solution to the “privacy-preserving keyword search problem” among multiple independent data owners. We consider a scenario in which each tenant is a data owner and a user’s goal is to efficiently search for granted documents that contain the target keyword across all the data owners. We first propose a verifiable yet accountable keyword searchable encryption (VAKSE) scheme based on symmetric bilinear mapping. For verifiability, a message authentication code (MAC) is computed for each associated piece of data; to keep the tag size constant, the computed MACs are combined with an exclusive-OR operation. For accountability, we propose a keyword-based accountable token mechanism in which the client’s identity is seamlessly embedded without compromising privacy. Furthermore, we introduce a parallel VAKSE scheme, in which the inverted index is partitioned into small segments that can all be processed concurrently. Formal security analysis and comprehensive experiments demonstrate the data privacy preservation and the efficiency of the proposed schemes, respectively.
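The constant-size tag trick is easy to picture: because XOR is associative and commutative, per-document HMAC tags collapse into one 32-byte value regardless of how many documents are covered. A minimal sketch (function names and key handling are illustrative, not the paper's VAKSE construction):

```python
import hmac
import hashlib

def mac(key: bytes, data: bytes) -> bytes:
    # one 32-byte HMAC-SHA256 tag per piece of data
    return hmac.new(key, data, hashlib.sha256).digest()

def xor_combine(tags) -> bytes:
    # XOR any number of 32-byte tags into a single constant-size tag
    out = bytes(32)
    for t in tags:
        out = bytes(a ^ b for a, b in zip(out, t))
    return out

key = b"shared-secret"
docs = [b"doc-1 contents", b"doc-2 contents", b"doc-3 contents"]
combined = xor_combine(mac(key, d) for d in docs)

# a verifier recomputes the MACs over the returned documents and compares
assert combined == xor_combine(mac(key, d) for d in docs)
```

Because XOR is order-independent, the verifier does not need to know the order in which the server processed the documents.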
Ciphertext-Policy Attribute-Based Searchable Encryption (CP-ABSE) is one of the most suitable encryption mechanisms for cloud environments, owing to its fine-grained access structure and its keyword retrieval capability over ciphertext. However, in CP-ABSE schemes, guaranteeing the forward security of outsourced cloud data and securely deleting data that is no longer needed, without relying on the cloud, are challenging problems. To handle these challenges, we propose a Puncturable CP-ABSE (Pun-CP-ABSE) scheme that achieves self-controlled data deletion with a fine-grained access structure under the searchable mechanism. The data owner punctures the trapdoor to accomplish the data deletion. The deletion process therefore does not need to communicate with a trusted third party and guarantees forward security. After the puncturation, the cloud server can no longer search the corresponding ciphertext. Furthermore, we prove that the Pun-CP-ABSE scheme is secure against the Chosen-Plaintext Attack (CPA) and the Chosen-Keyword Attack (CKA). We have also implemented the Pun-CP-ABSE scheme to demonstrate its efficiency and feasibility.
The ease of use and convenience of cloud computing come with considerable responsibility, which consists of carefully addressing the different security aspects of this technology. The integrity and availability of outsourced data are essential considerations in adopters’ final decisions. The most critical factor, however, is the efficiency of integrity checks, which must prioritize resource-constrained data owners without affecting the performance of the Cloud Service Provider. This paper proposes a secure scheme, called Proof of Exponentiation of Dynamic Data Possession (PoEDDP), based on RSA accumulators. The proof of concept demonstrates that this scheme is 20 times faster than other RSA-based cryptographic accumulator schemes, and it could be improved further with proper optimizations of large-integer multiplication.
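The RSA-accumulator idea behind such proofs can be sketched at toy scale: elements are folded into one accumulator value by exponentiation, and a membership witness is the accumulator computed over all the other elements (demo-sized modulus and hypothetical helper names; real constructions use a ~2048-bit RSA modulus and map elements to prime representatives):

```python
# toy modulus (demo only; real schemes use a large RSA modulus
# whose factorization is kept secret or discarded)
p, q = 1000003, 1000033
N = p * q
g = 3

elements = [101, 103, 107]  # elements are primes in real schemes

def accumulate(elems):
    # fold every element into the accumulator via exponentiation
    acc = g
    for e in elems:
        acc = pow(acc, e, N)
    return acc

def witness(elems, member):
    # witness = accumulator over all elements EXCEPT the member
    return accumulate([e for e in elems if e != member])

def verify(acc, member, wit):
    # raising the witness to the member's exponent must reproduce acc
    return pow(wit, member, N) == acc

A = accumulate(elements)
w = witness(elements, 103)
assert verify(A, 103, w)        # membership proof checks out
assert not verify(A, 109, w)    # non-member fails with the same witness
```

Verification is a single modular exponentiation, which is what makes such checks cheap for resource-constrained owners.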
Industrial Internet platforms (IIP) can provide many intelligent services based on the industrial big data stored in clouds. However, the vulnerability of cloud storage can cause data corruption, which makes integrity verification necessary. Unfortunately, existing cloud storage verification approaches cannot be directly applied to IIP, since they place heavy computational burdens on the edge side. In this work, we propose EDASVIC, an efficient and dynamic storage verification scheme for cloud storage in IIP. We adopt a polynomial commitment to build an efficient homomorphic authenticator, and further design an authenticator accumulator that can be generated with limited computational overhead. In addition, we integrate dynamic information into the authenticator accumulator to support data dynamics. The security of EDASVIC is analyzed under the random oracle model. We conduct extensive experiments to evaluate the performance of EDASVIC and compare it with state-of-the-art approaches. Experimental results affirm that EDASVIC is superior to existing solutions in terms of computational efficiency.
In cloud computing environments, secured group communication is vital for protecting shared resources and ensuring authorized access. Group key management and sharing protocols play a critical role in maintaining data security by enabling efficient and secure key distribution among group members. Cloud computing is a model in which resources on the Internet form a resource pool and may be dynamically assigned to diverse applications and services. Ensuring security in cloud storage is not easy: as data in the cloud is beyond the control of its legitimate participants, shared data should be made available on demand to authentic users, yet it is vulnerable to being misplaced or erroneously altered by the Cloud Service Provider (CSP) or by attackers. Defending data against illegal removal, alteration, and fabrication is also challenging. Hence, this paper proposes a Group Key Management and Sharing Protocol (GKMSP). The group key is produced by the data owner, while private and public keys are generated by the Trusted Authority (TA). The proposed scheme offers improved results, in terms of communication and storage costs versus the number of groups, key generation and total times versus the number of participants, and the number of participants versus attribute size, compared with the benchmarked Self-healing Attribute-based Privacy-aware Data Sharing (SAPDS), Access Control Mechanism for P2P Storage Cloud (ACPC), and Robust and Auditable Access Control (RAAC) schemes.
Cloud data warehouse (CDW) platforms are offered by many cloud service providers to give business users abundant storage and unlimited accessibility. Sensitive data warehouse (DW) data, consisting of dimension and fact data, is typically encrypted before it is outsourced to the cloud. However, queries over an encrypted DW are not practically supported by any analytical query tool. The Searchable Encryption (SE) technique is well suited to supporting keyword searches over encrypted data. Although many SE schemes have introduced their own searching methods based on indexing structures built on top of searchable encryption techniques, no scheme supports the Boolean expression queries essential for search conditions over the DW schema. In this paper, we propose a secure and verifiable searchable encryption scheme that supports Boolean expressions for CDW. The technical construct of the proposed scheme combines Partial Homomorphic Encryption (PHE), a B+Tree with an inverted index, and bitmapping functions to enable privacy-preserving SE with search performance suitable for an encrypted DW. To enhance scalability without requiring a third party to verify search results, we employ blockchain and smart contracts to automate authentication, search index retention, and trapdoor generation. For the evaluation, we conducted comparative experiments showing that our scheme is more efficient and effective than related works.
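The inverted-index half of such a construction is easy to picture in the clear: Boolean keyword queries become set operations over posting lists. A plaintext sketch only (the scheme above operates over encrypted indexes with PHE and a B+Tree, which this does not show):

```python
from collections import defaultdict

# build an inverted index: keyword -> set of document ids
docs = {
    1: "quarterly revenue by region",
    2: "revenue forecast for europe",
    3: "regional headcount report",
}
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

# Boolean expressions map directly onto set operations over posting lists
assert index["revenue"] & index["region"] == {1}          # revenue AND region
assert index["revenue"] | index["regional"] == {1, 2, 3}  # revenue OR regional
```

In a searchable-encryption setting the keywords, posting lists, and intermediate sets would all be protected, but the query algebra stays the same.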
This paper tackles the complex problem of optimizing resource configuration for microservice management in heterogeneous cloud environments. To address this challenge, an enhanced framework, the multi-objective microservice allocation (MOMA) algorithm, is developed to formulate the management of cloud microservice resources as a constrained optimization problem, guided by resource utilization and network communication overhead, two important factors in microservice resource allocation. The proposed framework simplifies the deployment of cloud services and streamlines workload monitoring and analysis within a diverse cloud system. The effectiveness of the proposed algorithm is compared comprehensively with existing algorithms on real-world datasets, with a focus on resource balancing, network overhead, and network reliability. Experimental results reveal that the proposed algorithm significantly enhances resource utilization, reduces network transmission overhead, and improves reliability.
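How an evolutionary search explores container-to-node assignments can be sketched in a few lines. This is a generic genetic algorithm with a toy fitness of load imbalance only (all names and parameters are illustrative; the MOMA objective above also weighs network communication overhead):

```python
import random

random.seed(7)  # fixed seed so the demo is reproducible

NODES, CONTAINERS = 4, 12
demand = [random.randint(1, 10) for _ in range(CONTAINERS)]  # CPU units per container

def fitness(assign):
    # lower is better: variance of per-node load approximates imbalance
    load = [0] * NODES
    for c, n in enumerate(assign):
        load[n] += demand[c]
    mean = sum(load) / NODES
    return sum((l - mean) ** 2 for l in load)

def evolve(pop_size=30, gens=100):
    pop = [[random.randrange(NODES) for _ in range(CONTAINERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, CONTAINERS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation
                child[random.randrange(CONTAINERS)] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

Real schedulers would replace the fitness function with the multi-objective cost (utilization, network overhead, reliability) and add placement constraints.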
In modern cloud computing, the need for flexible and scalable orchestration of services, combined with robust security measures, is paramount. In this paper, we propose an innovative approach for managing secure cloud bursting in Kubernetes, combining Attribute-Based Encryption (ABE) with Kubernetes labeling. Our model addresses the challenges of complexity, cost, and data protection compliance by leveraging both Kubernetes and ABE. We introduce an attribute-based bursting component that uses Kubernetes labels for orchestration, and an encryption component that employs ABE for data protection. This unified management model ensures data confidentiality while enabling efficient cloud bursting. Our approach combines the strengths of label-based orchestration with fine-grained encryption, providing a technologically advanced yet user-friendly solution for secure cloud bursting. We present a proof-of-concept implementation that demonstrates the feasibility and effectiveness of our model. Our approach offers a unified solution that complies with security and privacy laws while meeting the needs of contemporary cloud-based systems.
While blockchain is known to support open and transparent data exchange, partly due to its non-tamperability property, it can also be used to facilitate the spreading of fake or misleading information, or of information that is subsequently discredited. Hence, this paper proposes a practical redactable blockchain framework with a public trapdoor (hereafter referred to as PRBFPT). PRBFPT comprises an editing scheme for adding blocks using a new type of blockchain with a chameleon hash. Specifically, PRBFPT is able to involve all nodes in the blockchain in editing operations by means of a public trapdoor, without requiring additional trapdoor management by predefined nodes or organizations. PRBFPT is also designed to audit and record the content of each editing operation; in other words, even after the original data has been edited or deleted, PRBFPT can still verify its legitimacy. We also propose a contract-based locked voting scheme to better support voting. We then evaluate a prototype implementation of PRBFPT and find that it has a negligible impact on the performance of the original system. In addition, the evaluation shows that the cost of initiating the special transactions is comparable to that of normal Ethereum transactions and is within a manageable range.
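The chameleon-hash primitive behind redactable blocks can be sketched with a toy discrete-log instance: whoever holds the trapdoor can swap the message for a new one while the hash, and hence the chain linkage, stays unchanged (demo-scale parameters and illustrative names, not PRBFPT's actual construction):

```python
# demo parameters (real use: a 2048-bit safe prime)
p = 467               # safe prime, p = 2q + 1
q = 233
g = 4                 # generator of the order-q subgroup
x = 57                # trapdoor
h = pow(g, x, p)      # public hash key

def ch(m: int, r: int) -> int:
    # chameleon hash CH(m, r) = g^m * h^r mod p
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m, r, m_new):
    # trapdoor holder finds r_new with ch(m_new, r_new) == ch(m, r):
    # m + x*r == m_new + x*r_new  (mod q)
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 150, 99
m_new = 42
r_new = collide(m, r, m_new)
assert ch(m_new, r_new) == ch(m, r)   # hash value unchanged after the edit
```

Without the trapdoor `x`, finding such a collision is as hard as the discrete logarithm, which is why ordinary nodes cannot rewrite blocks.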
Data deduplication technologies are widely exploited to reduce capacity demands for storage. Previous chunk-based offline deduplication technologies often incur serious performance overhead due to data chunking and indexing. In particular, they are not efficient for non-volatile memory (NVM) based storage systems because they cannot fully exploit the byte-addressability of NVMs for fine-grained deduplication. In this paper, we propose I/O Causality based In-line Deduplication (ICID) to maximize the deduplication ratio for NVM-based storage systems. Unlike previous inline deduplication schemes that use hash indexes to identify duplicate data slices, ICID records memory-copy operations in a B-tree structure to achieve causality-based inline deduplication. We propose two novel techniques to manage memory-copy records in the B-tree efficiently. First, to speed up B-tree lookups, we group memory-copy records targeting the same page in a single B-tree node to improve data locality. Second, we exploit the spatial locality of memory accesses to identify outdated memory-copy records and delete them promptly to reduce the memory consumption of the B-tree. We evaluate ICID on a system equipped with Intel Optane DC Persistent Memory Modules. For a typical key-value store (LevelDB), our experimental results show that ICID achieves up to a 16× higher deduplication ratio and reduces the time cost of data deduplication by 47% on average compared with state-of-the-art deduplication schemes.
Cloud storage allows individuals to store and access data from remote locations, providing the convenience of on-demand access to high-quality cloud applications and eliminating the need to manage local hardware and software. A cloud storage system facilitates the efficient storage of data on cloud servers, allowing users to work with their data seamlessly without encountering resource constraints such as memory or storage limitations. Cloud computing is a promising technology owing to its ability to provide unlimited resources for computing and data storage services, which are crucial for managing data according to specific requirements. In current systems, data is saved in the cloud using dynamic data operations and computations. This study explores the underlying principles of scalable data-partitioning techniques in the context of distributed data processing in cloud environments. Its significance lies in the increasing dependence of enterprises on cloud platforms for data-intensive tasks such as machine learning, data analytics, and real-time data processing. The study examines several data-partitioning strategies and methodologies developed to address the unique issues posed by cloud systems, evaluating their influence on scalability, load distribution, and overall system efficiency. The main aim is to advance cloud-based data processing techniques, enabling enterprises to leverage the full potential of the cloud for data-centric projects.
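Two of the basic partitioning strategies such studies survey, hash partitioning and range partitioning, can be sketched directly (helper names are illustrative):

```python
import hashlib

def hash_partition(key: str, n_parts: int) -> int:
    # stable hash partitioning: the same key always maps to the same partition,
    # spreading keys roughly uniformly across partitions
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_parts

def range_partition(key: int, boundaries) -> int:
    # range partitioning: boundaries are sorted upper bounds, one per partition;
    # keys above the last bound fall into the final partition
    for i, bound in enumerate(boundaries):
        if key < bound:
            return i
    return len(boundaries)

records = ["user-1", "user-2", "user-42"]
parts = {r: hash_partition(r, 4) for r in records}
assert all(0 <= p < 4 for p in parts.values())

assert range_partition(5, [10, 100]) == 0     # below first bound
assert range_partition(55, [10, 100]) == 1    # between bounds
assert range_partition(500, [10, 100]) == 2   # above last bound
```

Hash partitioning favors uniform load; range partitioning preserves order, which makes range scans cheap but risks hotspots on skewed keys.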
In the rapidly evolving realm of cloud computing security, this paper introduces an innovative solution to persistent challenges. The proliferation of cloud technology has heightened concerns about data security, necessitating novel approaches to safeguarding sensitive information. The issue centers on the vulnerability of cloud-stored data, which calls for enhanced encryption and key management strategies; traditional methods often fall short in mitigating the risks associated with compromised encryption keys and centralized key storage. To combat these challenges, our proposed solution takes a two-phase approach. In the first phase, dynamic Advanced Encryption Standard (AES) keys are generated, ensuring each file is encrypted with a unique and ever-changing key. This significantly enhances file-level security, curtailing an attacker’s ability to decrypt multiple files even if one key is compromised. The second phase introduces blockchain technology: keys are securely stored with accompanying metadata, bolstering security and data integrity. Elliptic Curve Cryptography (ECC) public-key encryption enhances security during transmission and storage, while also facilitating secure file sharing. In conclusion, this comprehensive approach enhances cloud security, providing robust encryption, decentralized key management, and protection against unauthorized access. Its scalability and adaptability make it a valuable asset in contemporary cloud security paradigms, assuring users of data security in the cloud.
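The per-file dynamic-key idea can be sketched with an HMAC-based derivation: each (file, version) pair gets its own 256-bit key, so a leaked key exposes only one file at one version. This is a stdlib stand-in for illustration only; the scheme above generates AES keys and anchors them on a blockchain with ECC, none of which is shown here:

```python
import hmac
import hashlib
import secrets

master = secrets.token_bytes(32)   # root secret, kept by the data owner

def file_key(master: bytes, file_id: str, version: int) -> bytes:
    # derive a unique 256-bit key per (file, version) pair;
    # bumping the version effectively rotates the key
    info = f"{file_id}:v{version}".encode()
    return hmac.new(master, info, hashlib.sha256).digest()

k1 = file_key(master, "report.pdf", 1)
k2 = file_key(master, "report.pdf", 2)   # rotated key after re-encryption
k3 = file_key(master, "notes.txt", 1)
assert k1 != k2 and k1 != k3 and len(k1) == 32
```

Each derived key would then encrypt exactly one file with AES, and only the (encrypted) key material and its metadata would be recorded on the chain.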
Homomorphic encryption (HE) enables third-party servers to perform computations on encrypted user data while preserving privacy. Although conceptually attractive, software implementations of HE remain almost impractically slow. To address this challenge, various domain-specific architectures have been proposed to accelerate homomorphic evaluation, but efficiency remains a bottleneck. In this paper, we propose a homomorphic evaluation accelerator with heterogeneous reconfigurable modular computing units (RCUs) for the Brakerski/Fan-Vercauteren (B/FV) scheme. RCUs leverage operator abstraction to efficiently perform the basic sub-operations of homomorphic evaluation, such as residue number system (RNS) conversion, the number theoretic transform (NTT), and other modular computations. By combining these sub-operations, complex homomorphic evaluation operations such as multiplication, rotation, and addition are efficiently executed. To address the high demand for data access and improve memory efficiency, we design a coordinate-based address encoding strategy that enables in-place and conflict-free data access. Furthermore, specific optimizations are performed on the core sub-operations such as the NTT and automorphism. The proposed architecture is implemented on Xilinx Virtex-7 and UltraScale+ FPGA platforms and evaluated for polynomials of length 4096. Compared to state-of-the-art accelerators with the same parameter set, our accelerator achieves the following advantages: 1) 2.04× to 3.33× reduction in the area-time product (ATP) for the key sub-operation NTT, 2) 1.08× to 7.42× reduction in latency for homomorphic multiplication with higher area efficiency, and 3) support for a wider range of homomorphic evaluation operations, including rotation, compared to other B/FV-based accelerators.
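The NTT sub-operation such accelerators optimize is just a DFT over a finite field. A toy-parameter sketch using the textbook O(n²) form (hardware designs use O(n log n) butterfly networks and much larger moduli):

```python
q, n, w = 17, 8, 2          # 2 is a primitive 8th root of unity mod 17

def ntt(a, root):
    # O(n^2) transform for clarity: A[i] = sum_j a[j] * root^(i*j) mod q
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A):
    # inverse transform: use root^-1, then scale by n^-1 mod q
    a = ntt(A, pow(w, -1, q))
    inv_n = pow(n, -1, q)
    return [(x * inv_n) % q for x in a]

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(a, w)) == a          # the transform round-trips

# pointwise products in the NTT domain realize cyclic convolution,
# which is why NTT dominates polynomial multiplication in B/FV
x = [1, 2, 0, 0, 0, 0, 0, 0]
y = [3, 4, 0, 0, 0, 0, 0, 0]
prod = intt([(u * v) % q for u, v in zip(ntt(x, w), ntt(y, w))])
assert prod == [3, 10, 8, 0, 0, 0, 0, 0]
```

The accelerator's coordinate-based address encoding exists precisely to feed the butterfly stages of this transform without memory-bank conflicts.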
Containers are recognized for their lightweight and virtualization efficiency, making them a vital element in modern application orchestration. In this context, the scheduler is crucial in strategically distributing containers across diverse computing nodes. This paper presents a novel two-stage container scheduling solution that addresses node imbalances and efficiently deploys containers. The proposed solution formulates the scheduling process as an optimization problem, integrating various objective functions and constraints to enhance server consolidation and minimize energy consumption. The confidentiality of migrated containers is ensured through encryption, and the associated costs are incorporated into the optimization constraints. This approach ensures security in container scheduling, considering container attributes as input features in our proposed attributes-based encryption model. By carefully selecting containers and destination nodes, this work seeks to establish balance within cloud-based clusters. This contributes to the improvement of container orchestration systems and their effectiveness in real-world scenarios. The proposed solution’s efficacy is demonstrated in its ability to efficiently deploy containers in multi-data center cloud environments and seamlessly migrate them between hosts within the same data center or across different data centers. Our results show optimal consolidation with a reduction in the number of running hosts, ranging from 4% to over 18%. Additionally, the solution promotes minimal total power consumption with savings ranging from 3.5 to 16.25 megawatts, while also ensuring balanced server loads.
The exponential growth of textual data in Vision-and-Language Navigation tasks poses significant challenges for data management in large-scale storage systems. Data deduplication has emerged as a practical strategy for data reduction in large-scale storage systems; however, it has also raised security concerns. This paper introduces DEDUCT, an innovative data deduplication method for textual data. DEDUCT employs a hybrid approach that combines cloud-side and client-side deduplication mechanisms to achieve high compression rates while maintaining data security. DEDUCT’s lightweight pre-processing and client-side deduplication make it suitable for resource-constrained devices like IoT devices. It has also been designed to resist side-channel attacks. Experimental evaluations on the Touchdown dataset, consisting of human-written navigation instructions for routes, demonstrate the effectiveness of DEDUCT. It achieves compression rates of nearly 66%, significantly reducing storage requirements while preserving the confidentiality of textual data. This substantial reduction in storage demands can lead to significant cost savings and improved efficiency in large-scale data management systems.
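Client-side chunk-level deduplication, the core mechanism such systems build on, can be sketched as follows (fixed-size chunking and a plain in-memory store for illustration; DEDUCT additionally protects chunks cryptographically and resists side-channel attacks, which this does not show):

```python
import hashlib

class DedupStore:
    """Client-side chunk-level deduplication: only unseen chunks are uploaded."""

    def __init__(self):
        self.chunks = {}       # fingerprint -> chunk (stands in for cloud storage)

    def put(self, text: str, chunk_size: int = 16):
        uploaded = 0
        refs = []
        for i in range(0, len(text), chunk_size):
            chunk = text[i:i + chunk_size]
            fp = hashlib.sha256(chunk.encode()).hexdigest()
            if fp not in self.chunks:      # duplicate chunks are never re-sent
                self.chunks[fp] = chunk
                uploaded += 1
            refs.append(fp)
        return refs, uploaded

    def get(self, refs):
        # reassemble the document from its chunk references
        return "".join(self.chunks[fp] for fp in refs)

store = DedupStore()
doc = "follow the path " * 6     # highly repetitive instruction text
refs, uploaded = store.put(doc)
assert store.get(refs) == doc    # lossless reconstruction
assert uploaded < len(refs)      # duplicates were stored only once
```

On repetitive navigation instructions like these, only one copy of each distinct chunk ever reaches the store, which is where the reported compression comes from.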
Spectral clustering, a powerful algorithm in the field of AI, holds a significant role despite its inherent high time complexity. For data owners grappling with limitations such as small datasets and restricted computational resources, harnessing the computational capabilities of cloud computing and aggregating data from multiple sources can yield precise spectral clustering results. However, explicit data uploading to cloud servers poses privacy risks. In response to this challenge, we explore the outsourcing dilemma of spectral clustering in a cloud and multi-user environment and propose a quantum-secure and efficient solution. Specifically, by employing the CKKS homomorphic encryption algorithm within a dual non-collusive server model, we formulate a comprehensive and multi-user spectral clustering outsourcing scheme. Our approach addresses privacy concerns by introducing secure computation protocols for L2 norm, exponential function, and negative half power function. We elaborate on efficient computational algorithms for each stage of spectral clustering, ensuring accurate clustering outcomes without compromising dataset privacy. Moreover, in our scheme, users only need to upload their encrypted dataset without requiring direct interaction with each other or the cloud server until obtaining clustering results. Finally, we argue the IND-CPA security of our design and substantiate its accuracy and efficiency through theoretical comparison analysis and experimental evaluations.
With the advent of cloud computing and the vast amount of data produced by IoT wearable devices, outsourcing computation has become a widespread practice in providing health services to individuals and society. Conventional approaches typically focus on either secure data processing or fine-grain access control. Nevertheless, only a limited number of existing solutions consider secure fine-grain access control over the encrypted computational results. Notably, these solutions overlook data owners’ access control. In addition, they almost exclusively focus on data aggregation operations, neglecting multiplication and division operations on encrypted data, which are fundamental operations with significant importance in various application scenarios. In this paper, we present efficient and privacy-preserving schemes for multiplication and division operations with fine-grain data-sharing and user-centric access control capabilities, called SAMM and SAMD, respectively. We utilise a multi-key Paillier homomorphic cryptosystem to allow privacy-preserving computation of data from both single and multiple data owners. Additionally, we integrate ciphertext-policy attribute-based encryption to enable fine-grain sharing with multiple data requesters based on user-centric access control. Through formal security analysis, we demonstrate that these schemes ensure data confidentiality and authorisation. Moreover, the computational cost and communication overhead of our proposed schemes are thoroughly analysed, and our experimental results indicate that these schemes outperform existing state-of-the-art solutions in terms of efficiency, making them well-suited for use in modern IoT wearable healthcare systems.
Dynamic Searchable Symmetric Encryption (DSSE) is a promising technique in the field of cloud storage for secure search over encrypted data. A DSSE client can issue update queries to an honest-but-curious server to add or delete its ciphertexts on the server, and can delegate keyword search over those ciphertexts to the server. Numerous investigations focus on achieving strong security, such as forward and Type-I⁻ backward security, to reduce the information leaked by DSSE to the server as much as possible. However, existing DSSE schemes with such strong security cannot maintain search correctness and stable security (robustness, for short) if the client unintentionally issues irrational queries, such as duplicate add or delete queries, or delete queries for removing non-existent entries. Hence, this work proposes two new DSSE schemes, named SR-DSSEa and SR-DSSEb. Both schemes achieve forward and Type-I⁻ backward security while remaining robust when irrational queries are issued. In terms of performance, SR-DSSEa has lower communication costs and fewer roundtrips than SR-DSSEb. In contrast, SR-DSSEb has more efficient search performance than SR-DSSEa, close to that of the existing DSSE scheme that achieves the same security but fails to achieve robustness.
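The robustness notion can be illustrated with a toy update layer in which the client's local state filters out irrational queries before any token reaches the server. This is a simplified sketch, not the paper's SR-DSSEa/SR-DSSEb constructions, and the HMAC-derived tokens are a stand-in for a real DSSE index:

```python
import hmac, hashlib

KEY = b"client-secret-key"  # held only by the client

def token(keyword, doc_id):
    # Pseudorandom label; all the honest-but-curious server ever stores.
    return hmac.new(KEY, f"{keyword}|{doc_id}".encode(), hashlib.sha256).hexdigest()

class RobustIndex:
    """Toy update layer: irrational queries (duplicate adds/deletes, or
    deleting a non-existent entry) are filtered client-side so the server's
    encrypted index never desynchronizes from the client's view."""
    def __init__(self):
        self.present = set()   # client state: pairs currently indexed
        self.server = set()    # server state: opaque tokens only

    def add(self, kw, doc_id):
        if (kw, doc_id) in self.present:
            return False                     # duplicate add: ignored
        self.present.add((kw, doc_id))
        self.server.add(token(kw, doc_id))
        return True

    def delete(self, kw, doc_id):
        if (kw, doc_id) not in self.present:
            return False                     # delete of non-existent entry
        self.present.remove((kw, doc_id))
        self.server.discard(token(kw, doc_id))
        return True

idx = RobustIndex()
idx.add("invoice", "doc1")
idx.add("invoice", "doc1")        # duplicate: no effect
idx.delete("report", "doc9")      # non-existent: no effect
assert token("invoice", "doc1") in idx.server
```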
In general, cloud computing refers to the storage and access of data over the internet rather than on a local hard drive. Cloud services help reduce the space and money required for data storage. Data accessed from the cloud must be protected, yet cloud owners and users face a significant hurdle in terms of security and personal data privacy. Because data owners do not fully trust the cloud, they store their data in an encrypted format that is inaccessible to outsiders. Proxy re-encryption (PRE) is a popular method for sharing encrypted data stored in the cloud. When a data owner wants to share encrypted data with a data user, the owner generates a re-encryption key and sends it to the cloud server (proxy), which can use it to convert the owner's ciphertexts into ciphertexts decryptable by the user, without ever seeing the underlying plaintexts. Our implementation demonstrates that the proposed work protects privacy while enabling secure data sharing via cloud computing.
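The ciphertext-to-ciphertext translation at the heart of PRE can be sketched with a textbook BBS98-style scheme in a tiny prime-order subgroup; the group, keys, and randomness below are demonstration values only, and a real scheme uses cryptographically large parameters:

```python
# Toy BBS98-style proxy re-encryption in a prime-order subgroup.
p, q, g = 23, 11, 4          # g generates the order-11 subgroup of Z_23*

a = 3                         # Alice's (owner's) secret key
b = 7                         # Bob's (user's) secret key
r = 5                         # encryption randomness

m = pow(g, 9, p)              # message encoded as a subgroup element

# Encrypt to Alice: (m * g^r, g^(a*r))
c1, c2 = (m * pow(g, r, p)) % p, pow(g, a * r % q, p)

# Re-encryption key rk = b / a mod q; the proxy never sees m or plaintext.
rk = b * pow(a, -1, q) % q
c2_bob = pow(c2, rk, p)       # transformed component: now equals g^(b*r)

# Bob decrypts with his own key: m = c1 / c2_bob^(1/b)
inv_b = pow(b, -1, q)
recovered = c1 * pow(pow(c2_bob, inv_b, p), -1, p) % p
assert recovered == m
```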
The public key encryption with keyword search (PEKS) scheme is a cryptographic primitive that allows a cloud server to test whether a ciphertext matches a queried keyword without learning the keyword itself. An expressive PEKS scheme is a variant of the PEKS scheme that supports conjunctive and disjunctive searches (expressive search). Utilizing the expressive properties of an attribute-based encryption (ABE) scheme, most expressive PEKS schemes can be constructed from an ABE scheme. In this paper, we first give a brief review of the transformed expressive PEKS scheme by Shen et al. in 2019. Then, we present a keyword guessing attack on Shen et al.’s transformed expressive PEKS scheme and show that an adversary can correctly guess the supposedly hidden keyword.
In the realm of data storage and management, secure data deduplication represents a cornerstone technology for optimizing storage space and reducing redundancy. Traditional client-side deduplication approaches, while efficient regarding storage and network traffic, expose vulnerabilities that allow malicious users to infer the existence of specific files through traffic analysis. Even a proof-of-ownership scheme does not guarantee protection against all attack scenarios specific to data deduplication. This paper introduces a novel secure data deduplication framework employing a deduplication proxy that operates on-premise, effectively mitigating the risk of such inference attacks. By leveraging convergent encryption and Merkle tree challenges for proof of ownership, our solution ensures that data deduplication does not compromise data privacy or security. The deduplication proxy acts as an intermediary, performing deduplication processes on-premise. This approach not only preserves the efficiency benefits of deduplication but also enhances security by preventing external visibility into data traffic patterns. Our implementation, publicly available on GitHub, demonstrates the efficacy of the method for enforcing end-to-end encryption while maintaining data deduplication's storage-saving advantages. The proposed framework is suitable for organizations aiming to safeguard their data while optimizing storage resources.
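Convergent encryption, the piece that makes deduplication of encrypted data possible, can be sketched in a few lines: the key is derived from the content itself, so equal plaintexts always yield equal ciphertexts. A minimal sketch using a hash-counter keystream as a stand-in for a real stream cipher:

```python
import hashlib

def keystream(key, length):
    """Hash-counter keystream (illustrative stand-in for a real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def convergent_encrypt(plaintext):
    # Key derived from the content: identical files always produce
    # identical ciphertexts, which is what lets the proxy or server
    # deduplicate without ever seeing plaintext.
    key = hashlib.sha256(plaintext).digest()
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    return key, ct

k1, c1 = convergent_encrypt(b"quarterly report v3")
k2, c2 = convergent_encrypt(b"quarterly report v3")
assert c1 == c2            # duplicate detected without exposing plaintext
```

In the paper's framework this runs behind the on-premise proxy, so the cloud sees neither plaintexts nor per-upload traffic patterns.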
Chronic Kidney Disease is one of the most critical illnesses nowadays, and proper diagnosis is required as soon as possible. Machine learning techniques have become reliable aids for medical treatment: with the help of machine learning classifier algorithms, a doctor can detect the disease on time. From this perspective, Chronic Kidney Disease prediction is discussed in this article. The Chronic Kidney Disease dataset was taken from the UCI repository. Seven classifier algorithms were applied in this research: artificial neural network, C5.0, Chi-squared Automatic Interaction Detector, logistic regression, linear support vector machine (LSVM) with penalty L1 and with penalty L2, and random tree. An important-feature-selection technique was also applied to the dataset. For each classifier, the results were computed based on (i) full features, (ii) correlation-based feature selection, (iii) wrapper-method feature selection, (iv) least absolute shrinkage and selection operator (LASSO) regression, (v) the synthetic minority over-sampling technique (SMOTE) with LASSO-selected features, and (vi) SMOTE with full features. From the results, it is observed that LSVM with penalty L2 gives the highest accuracy, 98.86%, with SMOTE on full features. Along with accuracy, the precision, recall, F-measure, area under the curve, and GINI coefficient were computed, and the compared results of the various algorithms are shown in graphs. LASSO-selected features with SMOTE gave the best results after SMOTE with full features: in that setting, LSVM again gave the highest accuracy, 98.46%.
Along with the machine learning models, a deep neural network was applied to the same dataset; it achieved the highest accuracy of 99.6%.
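For reference, the reported metrics are all derived from a confusion matrix in the standard way; a minimal sketch with hypothetical counts (not the paper's numbers):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F-measure from a 2x2 confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Hypothetical counts for a CKD classifier on a 400-record test split.
acc, prec, rec, f1 = metrics(tp=240, fp=5, fn=10, tn=145)
assert round(acc, 4) == 0.9625
```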
Employee turnover poses a critical challenge that affects many organizations globally. Although advanced machine learning algorithms offer promising solutions for predicting turnover, their effectiveness in real-world scenarios is often limited because of their inability to fully utilize the relational structure within tabulated employee data. To address this gap, this study introduces a promising framework that converts traditional tabular employee data into a knowledge graph structure, harnessing the power of Graph Convolutional Networks (GCN) for more nuanced feature extraction. The proposed methodology extends beyond prediction and incorporates explainable artificial intelligence (XAI) techniques to unearth the pivotal factors influencing an employee’s decision to either remain with or depart from a particular organization. The empirical analysis was conducted using a comprehensive dataset from IBM that includes the records of 1,470 employees. We benchmarked the performance against five prevalent machine learning models and observed that our enhanced linear Support Vector Machine (L-SVM) model, combined with knowledge-graph-based features, achieved an impressive accuracy of 0.925. Moreover, the successful integration of XAI techniques for attribute evaluation sheds light on the significant impact of job environment, job satisfaction, and job involvement on turnover intentions. This study not only furthers the development of advanced predictive models for employee turnover but also provides organizations with actionable insights to strategically address and reduce turnover rates.
Students are a prized asset for every country, and proper guidance for students regarding their education-related issues can ultimately help uplift a country's economy. Different education models are followed around the world, among which Outcome-Based Education (OBE) is one. The OBE model comprises three main components: Program Educational Objectives (PEOs), Program Learning Outcomes (PLOs), and Course Learning Outcomes (CLOs). CLOs are the outcomes a student achieves after studying a course; a single course may contain one or more CLOs. CLOs are mapped to PLOs, and PLOs in turn are mapped to PEOs. Our objective in this work is to improve students' deficient/weak CLOs by suggesting online resources; in the absence of such a system, students must find these resources themselves or rely on the course teacher to recommend them. To achieve this objective, we created a dataset for the OBE education model, as to date no standard dataset exists for it. From this dataset, we created a Student-to-CLO matrix and performed bi-clustering on it to find groups of students with similar performance across CLOs (bi-clustering has so far been used mainly in bioinformatics to determine similarity in gene-expression data). The generated bi-clusters are sorted according to the homogeneity of their contained values and then mapped to a 2D grid to formulate a reinforcement-learning environment. The start state of the recommendation agent is determined using cosine similarity. When the agent visits a state, the deficient CLOs of that state are recommended to the student. The agent can visit only those states that are near its current state and reachable through its legal action space. An optimal sequence of actions for visiting different states of the 2D grid, one that can improve the student's performance, is determined using Q-learning.
Online resources, including research articles, YouTube videos, books, and online tutorials, are suggested to the student through a mobile app to improve their deficient CLOs.
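The grid-walking recommendation agent described above is, at its core, tabular Q-learning; a minimal self-contained sketch on a toy 3x3 grid (the rewards, learning rates, and grid size are illustrative, not the paper's):

```python
import random

random.seed(0)

# Minimal tabular Q-learning on a 3x3 grid, mirroring the setup of an
# agent walking a 2D grid of sorted bi-clusters; visiting a state here
# corresponds to recommending that state's deficient CLOs.
ROWS, COLS = 3, 3
GOAL = (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(state, a):
    dr, dc = ACTIONS[a]
    r = min(max(state[0] + dr, 0), ROWS - 1)
    c = min(max(state[1] + dc, 0), COLS - 1)
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -1.0)

for _ in range(1000):                           # training episodes
    s = (0, 0)
    while s != GOAL:
        a = (random.randrange(4) if random.random() < eps
             else max(range(4), key=lambda x: Q[(s, x)]))
        nxt, reward = step(s, a)
        best_next = max(Q[(nxt, x)] for x in range(4))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = nxt

# The greedy policy from the start state now heads straight to the goal.
s, path = (0, 0), [(0, 0)]
for _ in range(10):
    if s == GOAL:
        break
    s, _ = step(s, max(range(4), key=lambda x: Q[(s, x)]))
    path.append(s)
assert s == GOAL
```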
Depression constitutes a significant mental health condition, impacting an individual’s emotional state, thought processes, and ability to carry out everyday tasks. It is defined by ongoing feelings of sadness, diminished interest in previously enjoyed activities, alterations in appetite, sleep disturbances, decreased vitality, and challenges with focus. The impact of depression extends beyond the individual, affecting society at large through decreased productivity and higher healthcare costs. On social media, users often express their thoughts and emotions through posts, which can provide insightful data for identifying patterns of depression. This research aims to detect depression early by analyzing social media content with machine learning techniques. We built advanced machine learning models using a benchmark depression database containing 20,000 tagged tweets from user profiles identified as depressed or non-depressed. We introduce an innovative BERT-RF feature engineering method that extracts contextualized embeddings and probabilistic features from textual input. The Bidirectional Encoder Representations from Transformers (BERT) model, based on the Transformer architecture, extracts contextualized embedding features; these are then fed into a random forest model to generate class-probability features. These prominent features enhance the identification of depression from social media. To classify tweets using the features derived from the BERT-RF feature engineering step, we used five popular classifiers: Random Forest (RF), Multilayer Perceptron (MLP), K-Neighbors Classifier (KNC), Logistic Regression (LR), and Long Short-Term Memory (LSTM). Evaluation experiments show that our approach, using BERT-RF for feature engineering, enables the Logistic Regression model to outperform state-of-the-art methods with a high accuracy score of 99%.
We have validated the results through k-fold cross-validation and statistical T-tests. We achieved 99% k-fold accuracy during the validation of the proposed approach. This research contributes significantly to computational linguistics and mental health analytics by providing a robust approach to the early detection of user depression from social media content.
The global impact of the COVID-19 pandemic was huge, and it showed that reporting and collecting accurate healthcare data are crucial operations for governments. Not only test results but also people's vaccination information should be shared correctly between countries by trusted systems. This is now possible through the integration of new technologies such as blockchain and cryptography, with the help of secure, transparent, and privacy-centric methods. Many recent studies in the literature focus on blockchain usage in healthcare. However, they still have flaws in granting full authorization to individuals and in ensuring the security of personal information, speed, and scalability. They mostly use private or consortium blockchains, whereas a public blockchain, a system that everyone can join and audit, provides more reliable information. At the same time, the models proposed in the literature rely on heavy and slow encryption techniques. Our study focuses on using blockchain to combat pandemics by ensuring privacy and maintaining reliable medical data sharing. The proposed system is implemented by leveraging a public blockchain on Ethereum with smart contracts, IPFS for decentralized storage, and robust, fast encryption techniques such as ChaCha20. In addition to existing techniques, the framework introduces innovative methods, such as storing encrypted keys alongside encrypted data in IPFS, which enhances security and scalability. We also eliminate the use of doctors' private keys. The framework grants patients full ownership of their medical data: patients can grant or revoke access to their data, enhancing their control over personal information. The use of smart contracts to manage access rights ensures that only authorized parties can access the data, and patients can easily manage these permissions through a decentralized platform.
We aim to implement a framework that is fast and easy to use, and that differs from prior work in how it stores and shares medical data, employing different encryption methods and protocols on a public blockchain. We thereby enable novel management of COVID-19 medical records, which are personal data.
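The idea of storing the encrypted content key alongside the encrypted record as one self-contained object can be sketched as envelope encryption. ChaCha20 is not in the Python standard library, so a hash-counter keystream stands in for it here, and all keys and record contents are illustrative:

```python
import hashlib, os

def stream_xor(key, nonce, data):
    """Hash-counter keystream XOR (illustrative stand-in for ChaCha20)."""
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def wrap_record(record, patient_master_key):
    content_key, nonce = os.urandom(32), os.urandom(12)
    ct = stream_xor(content_key, nonce, record)
    # The content key is itself encrypted under the patient's master key
    # and stored alongside the ciphertext: one self-contained IPFS object.
    wrapped = stream_xor(patient_master_key, nonce, content_key)
    return {"nonce": nonce, "wrapped_key": wrapped, "ciphertext": ct}

def unwrap_record(obj, patient_master_key):
    content_key = stream_xor(patient_master_key, obj["nonce"], obj["wrapped_key"])
    return stream_xor(content_key, obj["nonce"], obj["ciphertext"])

master = os.urandom(32)
obj = wrap_record(b"PCR test: negative; vaccine: dose 2", master)
assert unwrap_record(obj, master) == b"PCR test: negative; vaccine: dose 2"
```

Granting access then amounts to re-wrapping the content key for another party, which the smart contract can gate, rather than re-encrypting the whole record.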
Blockchain technology has fascinated researchers and industry and significantly sparked research activity due to its transparency in transactions and data sharing. The striking proliferation of blockchain technology confronts the challenges associated with trust, transparency, security, centralization, supply-chain traceability, and regulatory compliance. This promising technology has the potential to bring benefits across diverse fields such as healthcare, supply chain management, manufacturing, cross-border payments, finance, and energy trading. Telemedicine primarily aims to facilitate the transmission of healthcare data through electronic channels, enabling users to access medical services. It supports healthcare services around the globe, aids early diagnosis and treatment, and assists with remote care by providing effective healthcare that is safe, secure, and reliable. However, Personal Health Records (PHR) face challenges due to a lack of data ownership, accuracy, reliability, and transaction security. The architecture presented in this work enables us to benefit from telemedicine and securely share PHRs. The proposed model offers a permissioned Hyperledger Fabric blockchain framework for secure PHR sharing among healthcare providers. The Byzantine Fault Tolerance consensus mechanism protects patient privacy, and the IPFS protocol is used to store the data off-chain. Additionally, a smart contract provides patients with granular access control over PHR data. Hyperledger Caliper is used as a benchmarking tool to test the system and determine the average latency of viewing and updating the PHR by healthcare providers, along with an analysis of CPU and memory utilization. As a result, our telemedicine goals of improving secure sharing and giving the patient access control are achieved.
The study investigates the intricate influence of gender and age variability in individuals diagnosed with diabetes, aiming to gain a comprehensive understanding of the diverse impact and implications of this prevalent metabolic disorder. A real-world dataset, obtained from a renowned diabetologist and meticulously maintained by Dr. Reddys’ Lab, serves as the foundation for rigorous analysis. Leveraging the capabilities of ensemble learning, an advanced technique that combines multiple models, the predictive model’s efficiency is substantially enhanced, resulting in precise and reliable predictions of individuals’ diabetic status. Addressing the challenge of diabetes prediction, a novel ensemble learning model was proposed. The model combines the strengths of three distinct algorithms: Random Forest, Extra Trees, and Multilayer Perceptron (MLP). The model’s output comprises a ternary label categorizing individuals as ‘‘diabetic, non-diabetic, or pre-diabetic’’, while the accompanying prediction score quantifies the likelihood of individuals belonging to each respective category. The findings of this research expand the existing body of knowledge on diabetes prediction, underscoring the untapped potential of ensemble learning methodologies in augmenting accuracy and predictive performance for diabetic patients.
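The three-model combination can be read as soft voting over class probabilities: average each model's probability vector and report the winning ternary label together with its prediction score. A minimal sketch with hypothetical per-model outputs standing in for the trained Random Forest, Extra Trees, and MLP:

```python
# Soft-voting sketch of the three-model ensemble. The per-model
# probabilities below are hypothetical stand-ins for trained models.
LABELS = ["non-diabetic", "pre-diabetic", "diabetic"]

def ensemble_predict(model_probs):
    n = len(model_probs)
    avg = [sum(p[i] for p in model_probs) / n for i in range(len(LABELS))]
    best = max(range(len(LABELS)), key=lambda i: avg[i])
    # Ternary label plus prediction score (averaged class probability).
    return LABELS[best], avg[best]

probs = [
    [0.10, 0.30, 0.60],   # Random Forest
    [0.20, 0.20, 0.60],   # Extra Trees
    [0.15, 0.35, 0.50],   # MLP
]
label, score = ensemble_predict(probs)
assert label == "diabetic"
```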
A land administration system (LAS) is a structured framework designed to govern the management of land resources in a specific region or country. However, LAS faces challenges like inefficiencies, a lack of transparency, and susceptibility to fraud. The digitization of land records improved efficiency but failed to address manipulation, centralized databases, and double-spending issues. Traditional lease and mortgage management systems also suffer from complexity, errors, and a lack of real-time validation. At present, a significant influx of land transactions produces substantial data, classifiable as big data due to constant minute-to-minute occurrences like land transfers, acquisitions, document verification, and leasing/mortgaging transactions. In this context, we present a Blockchain-driven system that not only tackles alteration and double-spending issues in traditional systems but also implements distributed data management. Current state-of-the-art solutions do not fully incorporate crucial features of Blockchain, such as transparency, prevention of double-spending, auditability, immutability, and user participation. To tackle this problem, this research introduces a comprehensive Blockchain-powered framework for lease and mortgage management, addressing transparency, user involvement, and double-spending prevention. Unlike existing solutions, our framework integrates key Blockchain characteristics for a holistic approach. Through practical use cases involving property owners, banks, and financial institutions, we establish a secure, distributed, and transparent method for property financing. We verify the system by employing smart contracts and assess the cost and security parameters while validating the blockchain-based mortgage and lease functions.
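The tamper-evidence such a framework relies on comes from hash chaining: altering any past transfer breaks every later hash link. A minimal ledger sketch (the transactions and field names are illustrative) shows why rewriting a recorded lease or mortgage is detectable:

```python
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, tx):
    # Each block commits to its transaction and the previous block's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"tx": tx, "prev": prev}
    block["hash"] = block_hash({"tx": tx, "prev": prev})
    chain.append(block)

def verify(chain):
    for i, blk in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if blk["prev"] != expected_prev:
            return False
        if blk["hash"] != block_hash({"tx": blk["tx"], "prev": blk["prev"]}):
            return False
    return True

chain = []
append(chain, {"parcel": "P-102", "action": "lease", "to": "BankA"})
append(chain, {"parcel": "P-102", "action": "mortgage", "to": "BankB"})
assert verify(chain)

chain[0]["tx"]["to"] = "Mallory"      # attempt to rewrite history
assert not verify(chain)
```

Double-spending a parcel (for example, mortgaging it twice) is caught the same way: the full, verifiable history of the parcel is visible to every validating participant.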
Identification of the type of gun used is essential in several fields, including forensics, the military, and defense. In this research, one of the powerful deep learning architectures is applied to identify several types of firearms based on their gunshot noises. For the purpose of extracting features from the audio data, the suggested technique makes use of YAMNet, an effective deep learning-based classification model. The Mel spectrograms created from the collected features are used for multi-class audio classification, which makes it possible to identify different types of guns. 1174 audio samples from 12 distinct weapons make up the study’s extensive dataset, which offers a varied and representative collection for training and evaluation. We achieve a remarkable accuracy of 94.96% by employing the best hyperparameter changes and optimization methods. The findings of this study make a substantial contribution to the domains of forensics, military, and defense, where precise gun type identification is crucial. Applying deep learning and mel spectrograms to analyze gunshot audio demonstrates itself to be a promising strategy, providing quick and accurate categorization. This research emphasizes the effectiveness and relevance of using YAMNet, an AI-driven model, as a superior answer to the issues of real-world weapon detection.
Diabetes is called the ‘‘Mother of all Diseases’’ because it affects multiple organs of an individual's body in some way. Its timely detection and management are critically important; otherwise, in the long run, it can cause several complications in a diabetic patient. Heart disease is one of the major complications of diabetes. This work proposes an Optimal Scrutiny Boosted Graph Convolutional LSTM (O-SBGC-LSTM), an SBGC-LSTM enhanced by the Eurygaster Optimization Algorithm (EOA) to tune hyperparameters, for the early prevention and detection of diabetes. This method not only captures discriminative features of spatial configuration and temporal dynamics but also explores the co-occurrence relationship between the spatial and temporal domains. It also presents a temporal hierarchical architecture to increase the temporal receptive field of the top SBGC-LSTM layer, which boosts the ability to learn high-level semantic representations and significantly reduces computation cost. The performance of O-SBGC-LSTM was found to be satisfactory overall, reaching >98% accuracy in most experiments. In comparison with classic machine learning approaches, the proposed hybrid deep learning model achieved better performance in almost all experiments that reported such comparison outcomes. Furthermore, since prevention is better than cure, fuzzy-based inference techniques are additionally employed to enhance the prevention procedure using a suggestion table.
It is non-trivial to provide semantic security for user data while achieving deduplication in cloud storage. Some studies deploy a trusted party to store deterministic tags that record data popularity and then provide different levels of security according to popularity. However, deterministic tags are vulnerable to offline brute-force attacks. In this article, we first propose a popularity-based secure deduplication scheme with fully random tags, which avoids storing deterministic tags. Our scheme uses homomorphic encryption (HE) to generate comparable random tags that record data popularity, and then uses binary search in an AVL tree to accelerate tag comparisons. Besides, we identify popularity-tampering attacks on existing schemes and design a proof-of-ownership (PoW) protocol to resist them. To achieve scalability and updatability, we introduce multi-key homomorphic proxy re-encryption (MKH-PRE) to design a multi-tenant scheme: users in different tenants generate tags using different key pairs, and cross-tenant tags can still be compared for equality. Meanwhile, our multi-tenant scheme supports efficient key updates. We give a comprehensive security analysis and conduct performance evaluations on both synthetic and real-world datasets. The results show that our schemes achieve efficient data encryption and key updates, and have high storage efficiency.
Lightweight devices in the Internet of Things (IoT) typically need to store massive data on a cloud server with strong processing and storage capabilities for later retrieval and usage. Since these data contain participants' sensitive information, they cannot be delivered to the cloud server in the clear. Public-key Encryption with Keyword Search (PEKS) allows customers to search for target encrypted files using keywords. However, the majority of PEKS implementations cannot resist malicious quantum-capable attackers, and with regard to forward security, they must perform many rounds of search to obtain the necessary data. To resolve these concerns, we propose a comprehensive Inner Product Outsourcing PEKS system (IPO-PEKS) with forward security based on the LWE assumption, which raises search efficiency by allowing authorized clients to find the information they desire in a single round and achieves more fine-grained searches. Furthermore, we offer an inner-product outsourcing calculation technique that allows the server to compute the inner-product result without knowing the details of either party, in order to conceal the privacy-relevant data of the transmitting and decryption states. The paradigm can be utilized for efficient state transition, using parallel computing to accomplish the target in one round of iteration.
The Internet of Things (IoT) has expanded rapidly, generating vast amounts of data that require effective protection. Traditionally, attribute-based searchable encryption (ABSE) has been used within the IoT to maintain data privacy and integrity. However, existing ABSE approaches often rely on centralized systems that are susceptible to single points of failure and are inefficient in terms of decryption and data retrieval. Therefore, we propose a novel blockchain-assisted multi-keyword searchable encryption scheme (BAMKSE) that utilizes the decentralized nature of blockchain to overcome these limitations. The scheme integrates ciphertext-policy attribute-based encryption (CP-ABE) with key-policy attribute-based encryption (KP-ABE) to provide advanced access control and mutual authentication, while offloading the computational burden of decryption to cloud-based pre-decryption services. Security analyses demonstrate the resilience of BAMKSE against chosen-keyword attacks (CKAs) and chosen-plaintext attacks (CPAs), offering an effective solution for secure and efficient IoT data handling.
In the era of digital advancements, safeguarding medical data holds significant importance. This article introduces a novel approach to encrypting images through public-key encryption, incorporating the properties of Elliptic Curve Cryptography (ECC) and the Blum-Goldwasser Cryptosystem (BGC). The proposed method capitalizes on the chaotic properties of a sequence generator to augment the randomness in the encrypted image. The encryption process begins with a secure key-exchange mechanism using elliptic curves and the Blum-Goldwasser Cryptosystem. Pixel randomization is achieved through a chaotic map, followed by encryption using ECC and BGC, which integrates the discrete logarithm problem, probabilistic encryption, and the quadratic residuosity problem. Both the ECC and BGC components contribute unpredictability and complexity, fortifying the security measures. The amalgamation of these cryptographic techniques provides resilience against cyber threats such as brute-force attacks and differential cryptanalysis. Thorough simulations and performance assessments affirm the effectiveness and computational efficiency of this hybrid approach compared to existing methods. The experimental values of information entropy, average correlation, NPCR, and UACI are 7.9998, 0.0010, 99.6901%, and 33.5260%, respectively, and the total time taken for the proposed methodology is 0.142 seconds. These values indicate that the proposed hybrid chaotic image-encryption method shows promise for diverse applications.
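The chaotic-map pixel-randomization stage can be sketched as a logistic-map-driven permutation: sorting the chaotic sequence yields a key-dependent pixel order. The map parameters and toy image below are illustrative, not the paper's:

```python
# Chaotic-sequence pixel permutation sketch: a logistic map generates the
# index order used to shuffle pixels before the ECC/BGC stage.
def logistic_permutation(n, x0=0.7, r=3.99):
    xs = []
    x = x0
    for i in range(n):
        x = r * x * (1 - x)        # logistic map iteration
        xs.append((x, i))
    # Sorting the chaotic values yields a key-dependent permutation.
    return [i for _, i in sorted(xs)]

def shuffle(pixels, perm):
    return [pixels[p] for p in perm]

def unshuffle(pixels, perm):
    out = [0] * len(pixels)
    for dst, src in enumerate(perm):
        out[src] = pixels[dst]
    return out

pixels = list(range(16))            # toy 4x4 grayscale image
perm = logistic_permutation(len(pixels))
scrambled = shuffle(pixels, perm)
assert scrambled != pixels
assert unshuffle(scrambled, perm) == pixels
```

The sensitivity of the logistic map to its seed (which the key exchange would supply) is what makes the permutation hard to reproduce without the key.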
Cloud computing has transformed digital service delivery by providing scalable, flexible access to computing resources, including servers, storage, and applications, under a pay-per-use model. This model utilizes geographically distributed data centers to enhance service delivery and dynamically adjust Service Level Agreement (SLA) pricing based on demand. However, effective resource allocation strategies remain challenging, especially for ensuring low latency and fast execution in real-time applications and interactive services. Increased data center load can degrade performance, impacting cost and productivity. To address these challenges, we developed the Intelligent Validation Cloud Broker System (IVCBS), which uses an algorithm to classify virtual machine (VM) resources and match them to users' request sizes, relying on a mathematical model based on the trapezoidal membership function from fuzzy logic. This reduces the number of fuzzy rules and improves decision-making accuracy. We tested 11 types of AWS General Purpose EC2 instances across 31 data centers in various regions. Implementing and comparing IVCBS with a traditional method under two policies, optimize response time and dynamically reconfigure load, showed that IVCBS with the optimized response time policy performed best in terms of overall response time, data center processing, and total VM cost. IVCBS addresses scalability and performance challenges by efficiently assigning VMs, managing workload distribution, and preventing data center overload. By improving the average data center request servicing time, it maintains high quality of service (QoS) and optimizes energy use.
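The trapezoidal membership function at the heart of the matching model is simple to state; a minimal sketch with a hypothetical "medium request" fuzzy set over normalized request size:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], plateaus at 1 on [b, c],
    falls on [c, d], and is 0 outside [a, d]. Used here to grade how well
    a VM size fits a request size."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Hypothetical "medium request" fuzzy set on a 0-10 request-size scale.
assert trapezoid(5, 2, 4, 6, 8) == 1.0     # squarely medium
assert trapezoid(3, 2, 4, 6, 8) == 0.5     # partially medium
assert trapezoid(9, 2, 4, 6, 8) == 0.0     # not medium at all
```

The broker can then assign each request to the VM class whose membership value is highest, which is how a handful of membership functions replaces a large fuzzy rule base.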
Cloud computing has been imperative for computing systems worldwide since its inception. Researchers strive to leverage the efficient utilization of cloud resources to execute workloads quickly while providing better quality of service. Among the several challenges on the cloud, task scheduling is one of the fundamental NP-hard problems. Meta-heuristic algorithms are extensively employed to solve task scheduling as a discrete optimization problem, and several meta-heuristic algorithms have therefore been developed. However, each has its own strengths and weaknesses: local optima, poor convergence, high execution time, and scalability are the predominant issues among them. In this paper, a parallel enhanced whale optimization algorithm (PEWOA) is proposed to schedule independent tasks in a cloud with heterogeneous resources. The proposed algorithm improves solution diversity and avoids local optima using a modified encircling maneuver and an adaptive bubble-net attacking mechanism. The parallelization technique keeps the execution time low despite its internal complexity. The proposed algorithm minimizes the makespan while improving resource utilization and throughput. Experiments demonstrate the effectiveness of PEWOA against the best-performing enhanced whale optimization algorithm (WOAmM) and Multi-core Random Matrix Particle Swarm Optimization (MRMPSO). The algorithm consistently produces better results with varying numbers of tasks on the GoCJ dataset, indicating better scalability. The experiments are conducted in CloudSim using a variety of GoCJ and HCSP instances. Various statistical tests are also conducted to evaluate the significance of the results.
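The shrinking-encircling maneuver that whale-style optimizers build on can be sketched on a toy continuous objective; this is a plain WOA-flavored update, not the paper's PEWOA (which adds a modified encircling maneuver, adaptive bubble-net attacking, discrete task encoding, and parallel execution):

```python
import random

random.seed(1)

def sphere(x):
    # Toy objective standing in for a makespan cost.
    return sum(v * v for v in x)

def woa_minimize(dim=5, pop=20, iters=300, lo=-10.0, hi=10.0):
    whales = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(whales, key=sphere)
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                  # shrinks linearly 2 -> 0
        for w in whales:
            for i in range(dim):
                A = a * (2 * random.random() - 1)  # |A| < a: exploitation grows
                C = 2 * random.random()
                D = abs(C * best[i] - w[i])        # distance to the leader
                w[i] = best[i] - A * D             # encircle the best solution
        cand = min(whales, key=sphere)
        if sphere(cand) < sphere(best):
            best = list(cand)
    return best

best = woa_minimize()
assert sphere(best) < 10.0    # far below a typical random initial solution
```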
The increased digitalization of society raises concerns regarding data protection and user privacy, and criticism of how companies handle user data without transparency and without providing adequate mechanisms for users to control how their own data is processed or shared. To address this problem and open the way for a secure and efficient society in which the privacy of citizens is paramount, the identity concept and proof-of-identity mechanisms need to be redesigned from the ground up. In this paper, we discuss how emerging Web3 technologies such as distributed ledger technology (DLT), blockchain, smart contracts, decentralized storage systems, and crypto wallets can be leveraged to design and implement a decentralized digital identity system based on decentralized identifiers (DID) and self-sovereign identities (SSI). Such a system puts users in full control over their own data while also providing a solid backbone for building interoperable systems that are secure, scalable, and efficient. We propose different architectures for the decentralized identity infrastructure and storage layer, and also discuss the mapping of these architectures onto cloud platforms. The main goal is to provide an architectural blueprint for a scalable, secure, privacy-preserving, and trusted system.
With the adoption of IoT technology and cloud computing in healthcare, outsourcing the Electronic Health Records (EHRs) generated by IoT devices has become a critical issue. Because EHRs may be collected by different units within a hospital and traverse intranets or public networks, they are vulnerable to privacy breaches. Existing research employs cloud-based access control with encryption to secure outsourced EHRs. Nevertheless, in an IoT cloud data-sharing environment, where data originates from numerous devices and user authorization status changes frequently, a gap persists in comprehensively and systematically integrating secure IoT data transfer and aggregation with efficient user revocation. In this research, we propose a blockchain-based access control scheme for outsourced IoT-enabled EHRs in a fog-assisted cloud environment. The scheme achieves secure and fine-grained access control with scalable and efficient revocation, based on pseudorandom encryption, symmetric encryption, ciphertext-policy attribute-based encryption (CP-ABE), and graph-based modeling. We utilize fog computing to offload the resource-intensive CP-ABE encryption and decryption tasks, and introduce an adaptive load-sharing algorithm to distribute workloads effectively among fog nodes. Moreover, we integrate blockchain technology to perform user authentication and verify the integrity of the EHRs within the system. Finally, a security analysis, a comparative computation analysis, and experiments demonstrate that the encryption and decryption costs of our scheme are comparable to related works, and that our ciphertext retrieval mechanism, which is essential for the ciphertext re-encryption triggered by user revocation, is more efficient than the traditional method.
Edge computing aims to push services closer to end users, greatly reducing latency and improving scalability. Yet there is untapped potential beyond the network's last mile, on the extreme edge. Extreme edge computing (XEC) is a computing paradigm that exploits computational resources in the end user's immediate vicinity. Edge crowd computing (ECC) is an orchestrated sharing-economy model within XEC that uses idle resources on user-owned devices for service provision, compensating the owners. We analyze an orchestrated ECC in which devices rent out resources in exchange for incentives. Our incentive-vacation queueing (IVQ) model relates performance to incentive payments using vacation queueing, capturing the multitenancy of devices through a server vacation that depends on the incentives received. In this article, we offer a framework for analyzing any sharing-economy system that can be modeled using IVQ. We discuss how the relationship between incentives and vacations, namely the incentive-vacation or IVQ function, affects performance. We examine two families of IVQ functions that can be adjusted to benefit either the orchestrator or the worker and introduce a performance metric for this preference. We derive analytical expressions for system performance that account for the random availability of worker devices under fluctuating incentives. The IVQ model explores commodifying user-owned resources in an ECC system, presenting a general approach for performance analysis in such environments.
Combining multi-key fully homomorphic encryption (MKFHE) and identity-based encryption (IBE) to construct a multi-identity-based fully homomorphic encryption (MIBFHE) scheme not only enables homomorphic operations and flexible access control over identity ciphertexts but also reduces the burden of public-key certificate management. However, the MKFHE schemes used to construct MIBFHE usually have complex constructions and high computational complexity, which the resulting MIBFHE schemes inherit. To solve this problem, we construct a concise and efficient MIBFHE scheme based on the learning with errors (LWE) problem. First, we construct an MKFHE scheme using a new method called "the decomposition method". Second, we suitably adapt a current IBE scheme. Finally, we combine the MKFHE scheme with the IBE scheme to construct our MIBFHE scheme and prove its IND-sID-CPA security under the LWE assumption in the random oracle model. The analysis shows that our MIBFHE scheme can generate the extended ciphertext directly from the encryption algorithm, without generating a fresh ciphertext in advance. In addition, the noise expansion rate is reduced from a polynomial in the lattice dimension n and modulus q to a constant K, the maximum number of users, and the scale of the introduced auxiliary ciphertexts when generating the extended ciphertext is reduced from Õ(n^4 L^4) to Õ(n^2 L^4).
Open-cry electronic auctions have revolutionized the landscape of high-value transactions for buying and selling goods. Online platforms such as eBay and Tradera have popularized these auctions due to their global accessibility and convenience. However, such centralized auctioning platforms rely on trust in a central entity to manage and control the processing of bids, e.g., their submission time and validity. Blockchain technologies have gained popularity for constructing decentralized systems owing to their versatility and properties favorable to decentralization. However, blockchain-based open-cry auctions are sensitive to transaction ordering and deadlines, which, in the absence of a governing party, must be ensured by the system design. In this paper, we identify the key properties for the development of decentralized open-cry auctioning systems, including verifiability, transaction immutability, ordering, and time synchronization. Three prominent blockchain platforms, namely Ethereum, Hyperledger Fabric, and R3 Corda, were analyzed for their ability to ensure these properties, in order to identify gaps. We propose a solution design that addresses these key properties and present a proof-of-concept (PoC) implementation of the design. Our PoC uses Hyperledger Fabric and mitigates the identified time-synchronization gaps by utilizing an external component: during chaincode execution, the creation and submission of bids trigger requests to a time-service API, which retrieves trusted timestamps from NTP services to obtain accurate bid times. We then analyze the system design and implementation against the identified key properties. Lastly, we conduct a performance evaluation of the time service and the PoC system in time-sensitive scenarios and assess their overall performance.
Conventional data-centric communication systems prioritize maximizing network throughput following Shannon's theory, which is primarily concerned with reliably transmitting data over limited radio resources. In the realm of edge learning, however, these methods frequently fall short because they rely on traditional source- and channel-coding principles and ultimately fail to improve learning performance. It is therefore crucial to transition from a data-centric viewpoint to a task-oriented communications approach in wireless system design. In this paper, we propose efficient communications under a task-oriented principle by optimizing power allocation and edge learning-error prediction in an edge-aided non-orthogonal multiple access (NOMA) network. Furthermore, we propose a novel approach based on the ant colony optimization (ACO) algorithm to jointly minimize the learning error and optimize the power allocation variables, and we investigate four additional benchmark schemes (particle swarm optimization, quantum particle swarm optimization, cuckoo search, and butterfly optimization algorithms). Simulation results validate the superiority of the ACO algorithm over the baseline schemes, achieving the best performance with less computation time. In addition, integrating NOMA into the proposed task-oriented edge learning system yields higher sum-rate values than conventional schemes.
In the current digital landscape, almost everyone is on one social media platform or another. People use social media for a plethora of purposes: staying connected with friends and family, accessing information about ongoing events, entertainment, professional networking, self-expression to a wide audience, promoting businesses, and joining online communities. This has driven ever-increasing consumption and usage of online social networks (OSNs), aided by features such as ubiquitous access, on-demand service, friendship networks, and user-engagement strategies like recommendation engines. However, the current approach has various limitations, such as centralization of control, lack of data ownership, poor access control, fake news, bot accounts, censorship, and digital rights management issues. Addressing these limitations requires a paradigm shift. This paper develops a social media application in which every post can be converted to a Non-Fungible Token (NFT) and sold to earn money, with the InterPlanetary File System (IPFS) used as decentralized storage. Algorithms for all the functionalities of the application are given, and an algorithm for a reputation score for every user and their posts is also proposed.
Heterogeneous Information Networks contain a wealth of hidden information that can support effective recommendation systems. A Heterogeneous Information Network (HIN) includes knowledge useful to a recommendation system, and network embedding is the common strategy for extracting that knowledge for use in recommendation platforms. Although user and business nodes have been used in HINs, review contents have not. In this work, we use review nodes in HINs in addition to user and business nodes. Since written reviews provide valuable insights into points of interest within recommendation systems, integrating review nodes into HINs allows us to assess their impact. Specifying meaningful meta-paths aids in extracting hidden information from a heterogeneous information network; while user and business nodes are typically utilized in such meta-paths, review nodes have been overlooked. We introduce new meta-paths incorporating review nodes to uncover hidden information in heterogeneous information networks, and leverage them to enhance recommendation performance. This study thus amalgamates rich written reviews with heterogeneous information networks and analyzes their effect on recommendation systems. Our experiments demonstrate that incorporating review texts improves the recommendation system, particularly when meaningful meta-paths are selected. Augmenting HINs with reviews captures additional relational information between users and businesses, thereby enhancing the recommendation model and underscoring the benefit of consolidating interaction information within HIN features.
In the ever-expanding domain of Internet of Things (IoT) networks, Distributed Denial of Service (DDoS) attacks represent a significant challenge, compromising the reliability of these systems. Traditional centralized detection methods struggle to cope in the widespread and diverse environment of IoT, motivating the exploration of decentralized approaches. This study introduces a federated learning-based approach, named Federated Learning for Decentralized DDoS Attack Detection (FL-DAD), which utilizes Convolutional Neural Networks (CNNs) to efficiently identify DDoS attacks at the source. Our approach preserves data privacy by processing data locally, avoiding central data collection while enhancing detection efficiency. Evaluated on a comprehensive intrusion detection dataset and compared with conventional centralized detection methods, FL-DAD achieves superior performance, illustrating the potential of federated learning to enhance intrusion detection systems in large-scale IoT networks by balancing data security with analytical effectiveness.
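To illustrate the federated aggregation step that keeps raw traffic data on-device, the sketch below implements plain FedAvg over client weight arrays. The layer shapes and client sample counts are invented for the example; the actual FL-DAD trains CNNs, whereas here random arrays stand in for trained parameters.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: weighted average of per-client model parameters, so raw
    traffic data never leaves the edge device -- only updates are shared."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three hypothetical IoT gateways with 2-layer models and sample counts.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [1000, 500, 250]                  # local training-set sizes
global_model = fedavg(clients, sizes)
print(global_model[0].shape, global_model[1].shape)
```

Clients with more local samples pull the global model proportionally harder, which is the standard FedAvg weighting.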
In the realm of wireless sensor networks (WSNs), preserving data integrity, privacy, and security against cyberthreats is paramount. Proxy re-encryption (PRE) plays a pivotal role in ensuring secure intra-network communication. However, existing PRE solutions encounter persistent challenges, including processing delays due to the transfer of substantial data to the proxy for re-encryption and the computational intensity of asymmetric cryptography. This study introduces an innovative PRE scheme, meticulously customized for WSNs, to enhance secure communication between nodes within the network and an external data server. The proposed scheme optimizes efficiency by integrating lightweight symmetric and asymmetric cryptographic techniques, minimizing computational costs during PRE operations and conserving energy for resource-constrained nodes. In addition, the scheme incorporates sophisticated key management and digital certificates to ensure secure key generation and distribution, which in turn facilitates seamless authentication and scalable data sharing among the entities in a WSN. The scheme keeps sensor-node data encrypted and delegates secure re-encryption tasks exclusively to cluster heads, reinforcing data privacy and integrity. Comprehensive evaluations of security, performance, and energy consumption validate the robustness of the scheme: the results confirm that the proposed PRE scheme significantly enhances the security, efficiency, and overall network lifetime of WSNs.
Through side-channel attacks, a fraction of the secret keys used in cryptographic schemes can be leaked to adversaries. Recently, adversaries have realized practical side-channel attacks that break existing cryptographic schemes. In response, researchers have investigated and proposed an approach to withstand such attacks, called leakage-resilient cryptography. Very recently, several leakage-resilient anonymous multi-receiver encryption (LR-AMRE) schemes based on various public-key systems (PKS) have been proposed. However, these LR-AMRE schemes are not suitable for a heterogeneous public-key environment, in which an authorized receiver group includes heterogeneous receivers under various PKS settings holding various types of secret/public key pairs. In this article, we propose the first leakage-resilient anonymous heterogeneous multi-receiver hybrid encryption (LR-AHMR-HE) scheme for heterogeneous public-key system settings. A new framework and associated adversary games for the LR-AHMR-HE scheme are defined; in these games, adversaries are permitted to continuously learn a fraction of the secret keys. Under the adversary games, formal security proofs show that the proposed scheme is secure against two types of adversaries (illegitimate users and malicious authorities). Comparisons with several related previous schemes demonstrate the merits of our scheme.
Cloud storage is now widely used, but its reliability has always been a major concern. Cloud block storage (CBS) is a prominent type of cloud storage; it has the architecture closest to the underlying storage and can provide interfaces for other types. Data modifications in CBS carry potential risks such as null references or data loss, and formal verification of these operations can improve the reliability of CBS to some extent. Although separation logic is a mainstream approach to verifying program correctness, the complex architecture of CBS creates challenges for such verification. This paper develops a proof system based on separation logic for verifying CBS data modifications. The proof system can represent the CBS architecture, describe the properties of the CBS system state, and specify the behavior of CBS data modifications. Using the interactive verification approach of Coq, the proof system is implemented as a verification tool, with which the paper builds machine-checked proofs of the functional correctness of CBS data modifications. This work thus analyzes the reliability of cloud storage from a formal perspective.
With the advent of the Internet of Things (IoT) paradigm and the prolific growth in technology, the volume of data generated by intelligent devices has increased tremendously. Cloud computing provides virtually unlimited processing and storage capabilities for this data. However, the cloud computing paradigm suffers from high transmission latency, high energy consumption, and a lack of location awareness, while the data generated by intelligent devices is delay-sensitive and needs to be processed on the fly; cloud computing alone is therefore unsuitable for executing such delay-sensitive workloads. To curtail these issues, the fog paradigm, which allows data to be processed in the proximity of IoT devices, was introduced. The fog paradigm, however, has limited capabilities and is unsuitable for processing large volumes of data, so it must collaborate with the cloud to ensure the smooth execution of delay-sensitive application tasks alongside the large volume of data generated. In this paper, an efficient resource allocation framework is proposed to effectively utilise fog and cloud resources for executing delay-sensitive tasks and the huge volume of data generated by end users. Resources are allocated to tasks in two stages. First, the tasks in the arrival queue are classified based on the task guarantee ratio on the cloud and fog layers and allocated to suitable resources in the layers of their respective classes. Second, we apply a Bayes classifier to previous allocation history to classify newly arrived tasks and allocate suitable resources for their execution in the layers of their respective classes. A Crayfish Optimization Algorithm (COA) is then used to generate an optimal resource allocation in both the fog and cloud layers that reduces the delay and execution time of the system. The proposed method is implemented using the iFogSim simulator toolkit, and the execution results prove more promising than the state-of-the-art methods.
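The history-based classification stage can be sketched with a tiny categorical naive Bayes model: given past (task features, layer) allocations, a new task is routed to the layer with the highest posterior. The feature names and history below are hypothetical, not from the paper.

```python
from collections import Counter, defaultdict

def train_nb(history):
    """history: list of (features, layer) pairs from past allocations."""
    layer_counts = Counter(layer for _, layer in history)
    feat_counts = defaultdict(Counter)   # (layer, idx) -> value frequencies
    for feats, layer in history:
        for i, v in enumerate(feats):
            feat_counts[(layer, i)][v] += 1
    return layer_counts, feat_counts

def classify(task, layer_counts, feat_counts):
    """Pick the layer maximizing P(layer) * prod P(feature | layer),
    with add-one smoothing over two values per feature."""
    total = sum(layer_counts.values())
    best, best_p = None, -1.0
    for layer, c in layer_counts.items():
        p = c / total
        for i, v in enumerate(task):
            p *= (feat_counts[(layer, i)][v] + 1) / (c + 2)
        if p > best_p:
            best, best_p = layer, p
    return best

history = [
    (("small", "tight"), "fog"), (("small", "tight"), "fog"),
    (("large", "loose"), "cloud"), (("large", "loose"), "cloud"),
    (("small", "loose"), "fog"), (("large", "tight"), "cloud"),
]
model = train_nb(history)
print(classify(("small", "tight"), *model))   # fog
print(classify(("large", "loose"), *model))   # cloud
```

Small, tight-deadline tasks land on the fog layer because the history makes that posterior dominate; the paper's first-stage guarantee-ratio classification and the COA refinement are not modeled here.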
Amid the rapid development of smart grids, data security and privacy face serious challenges; protecting the privacy of individual users while still obtaining user-aggregated data has attracted widespread attention. In this study, we propose an encryption scheme based on differential privacy for the problem of user privacy leakage when aggregating data from multiple smart meters. First, we use an improved homomorphic encryption method to realize encrypted aggregation of users' data. Second, we propose a double-blind noise addition protocol that generates distributed noise through interaction between users and a cloud platform, preventing semi-honest participants from stealing data by colluding with one another. Finally, simulation results show that the proposed scheme can encrypt the transmission of multi-meter data while satisfying the differential privacy mechanism: even if an attacker has sufficient background knowledge, the security of each user's electricity information is ensured.
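A minimal sketch of the aggregation idea, standing in for the paper's homomorphic scheme (the double-blind distributed-noise protocol is omitted): pairwise random masks hide each meter's reading yet cancel in the sum, so the platform learns only the aggregate. All readings and parameters are illustrative.

```python
import random

def pairwise_masks(n, modulus, seed=42):
    """Each pair (i, j) shares a random mask; i adds it, j subtracts it,
    so every mask cancels in the aggregate."""
    rng = random.Random(seed)
    masks = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)
            masks[i] = (masks[i] + m) % modulus
            masks[j] = (masks[j] - m) % modulus
    return masks

MOD = 2**32
readings = [231, 187, 402, 95]                 # Wh, one per smart meter
masks = pairwise_masks(len(readings), MOD)
ciphertexts = [(r + m) % MOD for r, m in zip(readings, masks)]
aggregate = sum(ciphertexts) % MOD             # masks cancel out
print(aggregate)                               # 915 = sum of readings
```

Individual ciphertexts reveal nothing without the pairwise masks, while the aggregator still recovers the exact total; a real differentially private scheme would additionally add calibrated noise shares to the readings.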
Container technology is growing popular in cloud environments due to its lightweight nature and convenient deployment. The container registry plays a critical role in container-based clouds, as many container startups involve downloading layer-structured container images from a registry. However, the container registry struggles to efficiently manage (i.e., transfer and store) images given the emergence of diverse services and new image formats, because it manages images uniformly at layer granularity. On the one hand, uniform layer-level management cannot fit the varied requirements of different kinds of containerized services well; on the other, new image formats that organize data in blocks or files cannot benefit from it. In this paper, we perform the first analysis of image traces at multiple granularities for various services and provide an in-depth comparison of different image formats. The traces are collected from a production-level container registry, amounting to 24 million requests and more than 184 TB of transferred data. We provide a number of valuable insights, including request patterns of services, file-level access patterns, and bottlenecks associated with different image formats. Based on these insights, we also propose two optimizations to improve image transfer and application deployment.
Increasing the security of block ciphers is a topic of great interest today, and there is a variety of work on enhancing the strength of such ciphers. Many studies focus on the Advanced Encryption Standard (AES), presenting methods of making block ciphers dynamic to improve their security; such dynamization can alter block cipher transformations such as substitution, permutation, or both. In this article, we propose an algorithm to create new, key-dependent XOR tables from an initial secret key. We also prove that the new XOR operation preserves, in the ciphertext, the independent and equiprobable distribution of the random key. We then apply these new XOR tables to make AES dynamic at the AddRoundKey transformation. The construction yields a considerable number of XOR tables, about (16!)^2. With such a vast number of key-dependent dynamic XOR tables, cryptanalysts will have great difficulty finding the actual XOR table used in the modified AES block cipher, so AES is significantly strengthened.
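The snippet below is a simplified, hypothetical illustration of a key-dependent XOR table over nibbles (it is not the paper's construction, and no full AES integration is shown): a secret permutation derived from the key defines a new operation that remains self-inverting, which is the property that lets decryption undo a dynamic AddRoundKey step.

```python
import hashlib
import random

def key_dependent_xor_table(key: bytes, size=16):
    """Derive a secret permutation P of {0..size-1} from the key and
    tabulate the 'new XOR' a * b = P^-1(P[a] ^ P[b]). The table is
    key-dependent, yet applying it twice with the same b recovers a."""
    seed = int.from_bytes(hashlib.sha256(key).digest(), "big")
    perm = list(range(size))
    random.Random(seed).shuffle(perm)
    inv = [0] * size
    for x, y in enumerate(perm):
        inv[y] = x
    return [[inv[perm[a] ^ perm[b]] for b in range(size)] for a in range(size)]

t = key_dependent_xor_table(b"round-key material")   # hypothetical key
a, b = 5, 12
c = t[a][b]            # combine one state nibble with one key nibble
assert t[c][b] == a    # the same table with b undoes the operation
```

Because P^-1(P[c] ^ P[b]) = P^-1(P[a] ^ P[b] ^ P[b]) = a, the operation is an involution in its second argument, mirroring how plain XOR behaves in AddRoundKey while hiding which of the many possible tables is in use.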
The Know Your Customer (KYC) process is a fundamental prerequisite for any financial institution's compliance with the regulatory framework. Blockchain technology has emerged as a revolutionary solution to enhance the effectiveness of the KYC procedure: it ensures that the KYC process is transparent, secure, and immutable, offering a robust way to combat fraudulent activities. The potential of blockchain technology to revolutionize the KYC process has been acknowledged globally, as it provides a decentralized platform for storing customer data that financial institutions can access seamlessly. Using Ethereum blockchain technology in KYC procedures can enhance the efficiency of financial institutions, significantly reducing the time and cost associated with the process. This work aims to provide a viable and sustainable solution to the challenges banks experience in implementing KYC procedures and onboarding new customers. The proposed solution involves the central bank maintaining a comprehensive register of all registered banks while closely monitoring their adherence to the existing regulations governing KYC and customer acquisition.
Cloud computing platforms offer numerous applications and resources such as data storage, databases, and network building. Efficient task scheduling, however, is crucial for minimizing overall execution time. In this study, workflows are used as datasets to compare scheduling algorithms, including Shortest Job First (SJF), First Come First Served (FCFS), Dynamic Voltage and Frequency Scaling (DVFS), and the Energy Management Algorithm (EMA). The algorithms are implemented in the Visual Studio .NET framework environment with a varying number of virtual machines. The experimental findings indicate that increasing the number of virtual machines reduces makespan. Moreover, EMA outperforms Shortest Job First by 2.79% for the CyberShake workflow and surpasses First Come First Served by 12.28%, producing 21.88% better results than both algorithms combined; for the Montage workflow, EMA performs 4.50% better than Shortest Job First and 25.75% better than First Come First Served. Finally, we ran simulations to determine the performance of the suggested mechanism and contrasted it with widely used energy-efficient techniques. The simulation results demonstrate that the suggested design can successfully reduce the amount of data and provide suitable scheduling in the cloud.
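The effect of adding virtual machines on makespan can be sketched with simple greedy list scheduling, a stand-in for the paper's schedulers (the DVFS and EMA algorithms additionally account for energy). The task run times below are invented for illustration.

```python
def schedule(tasks, n_vms, order):
    """Greedy list scheduling: each task (taken in the given order) goes to
    the VM that frees up earliest; returns the resulting makespan."""
    finish = [0.0] * n_vms
    for t in order:
        vm = finish.index(min(finish))   # earliest-available VM
        finish[vm] += tasks[t]
    return max(finish)

tasks = [12.0, 3.0, 7.0, 9.0, 2.0, 5.0]      # hypothetical task run times
for n_vms in (1, 2, 3):
    # FCFS order: tasks dispatched in arrival sequence
    print(n_vms, schedule(tasks, n_vms, range(len(tasks))))
```

With one VM the makespan equals the total work (38.0); spreading the same tasks over two and three VMs drops it to 19.0 and 14.0, matching the observation that more virtual machines reduce makespan.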
Concern about, and opposition to, centralized management of user identities by third-party authorities have led to the emergence of Self-Sovereign Identity (SSI). With the help of blockchain, SSI effectively restores users' control over their own identities, but privacy concerns due to blockchain transparency, coupled with the absence of accountability mechanisms such as Know Your Customer (KYC) and Anti-Money Laundering (AML), have cast uncertainty upon the viability of SSI. In this paper, we bridge this gap by introducing AASSI, a pioneering SSI protocol meticulously designed to balance the twin imperatives of privacy and accountability. Specifically, AASSI supports anonymity, self-derivation, fine-grained tracing, and selective revocation. For anonymity and self-derivation, AASSI introduces redactable signatures, which empower users to autonomously derive a distinct credential for each user-service-provider interaction, effectively improving privacy protection and self-management. For fine-grained tracing, the protocol employs a dual-tag system that facilitates tracing users' real identities as well as granular historical records of derived credentials. For selective revocation, AASSI leverages dynamic accumulators as a building block to enable the revocation of offending users. A functional comparison shows that AASSI offers robust features; in particular, it supports novel self-derivation, further increasing user control over identity, and fine-grained tracing, which gives tracers more flexibility. We demonstrate the protocol's efficiency through an off-chain timing comparison, which shows that AASSI dramatically reduces the time overhead of credential verification; we also measure the on-chain communication overhead and experimentally prove the feasibility of AASSI.
In the ever-evolving technological landscape, ensuring high system availability has become a paramount concern. This research paper focuses on cloud computing, a domain witnessing exponential growth and emerging as a critical use case for high-availability systems. Meeting this requirement means combining many services in a cloud infrastructure according to user demands. Central to this study is load balancing, an integral element in harnessing the full potential of heterogeneous computing systems; in cloud environments, its dynamic management is crucial. This study explores how virtual machines can dynamically remap resources in response to fluctuating loads, optimizing overall network performance. The core of this research is an in-depth analysis of several metaheuristic algorithms applied to load balancing in cloud computing: Genetic Algorithm, Particle Swarm Optimization, Ant Colony Optimization, Artificial Bee Colony, and Grey Wolf Optimization. Utilizing Cloud Analyst, the study conducts a comparative analysis of these techniques, focusing on key performance metrics such as Total Response Time (TRT) and Data Center Processing Time (DCPT). The findings offer insights into the varying behaviors of these algorithms under different cloud configurations and user retention levels. The ultimate aim is to pave the way for innovative load-balancing strategies in cloud computing; by providing a comprehensive evaluation of existing metaheuristic methods, this paper contributes to advancing high-availability systems and underscores the importance of tailored solutions in the dynamic realm of cloud technology.
With the development of data science, AI, and data transactions, an increasing number of users are utilizing multi-party data for federated machine learning to obtain their desired models, and scholars have proposed numerous federated learning frameworks to address practical issues. However, three issues still need to be addressed in current frameworks: 1) privacy protection, 2) poisoning attacks, and 3) protection of the interests of participants. To address these, this paper proposes a novel federated learning framework, MS-FL, based on multiple security strategies. The framework's algorithms guarantee that data providers need not worry about data privacy leakage, while defending against poisoning attacks from malicious nodes. Finally, a blockchain protocol is utilized to ensure the interests of all parties are protected. Theoretical derivation proves the effectiveness of the framework, and experimental results show that the algorithm designed in this paper outperforms other algorithms.
In our data-centric society, major service providers have access to vast amounts of user information for convenient and efficient services, with privacy implications when users authorize the sharing of personal data managed by those providers. To make authorization private and controllable, we propose a private authorization scheme oriented toward service providers. A decentralized, publicly verifiable re-encryption method based on IPFS is proposed to minimize reliance on service providers by shifting to a distributed storage and computation model. We also propose a trustless authorization authentication method that hides the authorization relationship to protect user privacy. Finally, we evaluate the security of our scheme as well as its performance to demonstrate its utility.
Cloud computing has been a prominent technology that allows users to store their data and outsource intensive computations. However, users of cloud services are also concerned about protecting the confidentiality of their data against attacks that can leak sensitive information. Although traditional cryptography can be used to protect static data or data being transmitted over a network, it does not support processing of encrypted data. Homomorphic encryption can be used to allow processing directly on encrypted data, but a dishonest cloud provider can alter the computations performed, thus violating the integrity of the results. To overcome these issues, we propose PEEV (Parse, Encrypt, Execute, Verify), a framework that allows a developer with no background in cryptography to write programs operating on encrypted data, outsource computations to a remote server, and verify the correctness of the computations. The proposed framework relies on homomorphic encryption techniques as well as zero-knowledge proofs to achieve verifiable privacy-preserving computation. It supports practical deployments with low performance overheads and allows developers to express their encrypted programs in a high-level language, abstracting away the complexities of encryption and verification.
Searchable encryption is widely used in electronic health record systems because it enables users to search ciphertext data without decryption. However, existing searchable encryption schemes lack fine-grained access policies with wildcards in electronic health record systems, and they consider neither policy hiding nor the incomplete search results that cloud servers may return. To solve these problems, this paper proposes a blockchain-aided attribute-based searchable scheme with inner product predicates. In the proposed scheme, the attribute-based encryption mechanism implements fine-grained access policies with wildcards, improving the data owner's ability to control access authorization precisely. Introducing the inner product predicate both achieves fully hidden access policies and prevents the leakage of sensitive medical data. The immutability of the blockchain ensures the integrity of multi-keyword search results, guaranteeing reliable data sharing. Finally, a security proof and performance evaluation confirm the security and effectiveness of the proposed scheme.
Connecting smart industrial components to computer networks revolutionizes business operations. However, in the Industrial Internet of Things, data sharing faces bandwidth, computational, and privacy issues. Researchers have applied cloud computing and fine-grained access control to overcome these challenges, but traditional centralized computing systems suffer from single points of failure. To mitigate these challenges, we propose a secure and incentive-based data-sharing framework built on blockchain technology. We leverage blockchain for its ability to provide secure and tamper-resistant data storage and sharing: participants store their data on a distributed ledger (DL), preventing unauthorized access. A security protocol is designed that utilizes the properties of elliptic curve cryptography (ECC). Moreover, the Shapley value is employed to calculate revenue and distribute it fairly. For the formal security evaluation, we conducted extensive simulations using the Automated Validation of Internet Security Protocols and Applications (AVISPA) and Scyther protocol analysis tools, which demonstrated that our protocol is robust against various adversarial attacks. The experimental results show that the proposed incentive distribution framework achieves fairness in the distribution of revenue among participants.
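The Shapley value that the framework uses for fair revenue division can be sketched directly from its definition: each participant receives their marginal contribution averaged over all orderings of the coalition. The characteristic function `v` below is a made-up example (revenue proportional to contributed data volume), not the paper's model.

```python
# Shapley-value sketch for fair revenue distribution among data providers.
# Exhaustive over all permutations, so only practical for small coalitions.
from itertools import permutations

def shapley(players, v):
    """Average marginal contribution of each player over all orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            phi[p] += v(frozenset(coalition)) - before
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical example: revenue equals total data volume contributed.
data = {"A": 50, "B": 30, "C": 20}
def v(coalition):
    return sum(data[p] for p in coalition)

shares = shapley(list(data), v)
assert shares == {"A": 50.0, "B": 30.0, "C": 20.0}
```

Because this example `v` is additive, each share equals the player's own contribution; with synergies between participants (the interesting case for data sharing), the Shapley value splits the surplus fairly as well.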
Edge devices will produce enormous quantities of data daily as the Industrial Internet of Things (IIoT) expands in scope. Yet most data is stored in data centers, making it challenging to transfer data between domains safely. Smart logistics products have changed dramatically due to the prevalence of decentralized edge computing and blockchain in the industrial sector. To address the need to exchange data between logistics networks, we propose a novel decentralized hierarchical attribute-based encryption (HABE) scheme combining edge computing and blockchain. First, we offer a data encryption strategy in which edge devices can send data to a nearby cloud network for processing while maintaining privacy. In addition, we develop a blockchain-integrated data-sharing scheme that enables users to share data via edge and cloud storage. In particular, the design incorporates an encryption-based authentication system that verifies users’ access rights at the network’s periphery in a decentralized manner. Using HABE, we provide a blockchain-integrated data architecture that protects user privacy. The suggested design utilizes the edge and cloud network paradigms together with HABE to maintain privacy, and works well with smart logistics applications. The authentication time of the proposed model is reduced by a factor of 1.5 compared with the centralized model. The analyses and experimental findings show that the proposed blockchain-integrated edge computing architecture outperforms existing schemes in terms of data sharing, data privacy, and security.
The current publishing landscape grapples with opacity in the review process. In response, a blockchain-driven system is proposed to establish transparent and auditable records for evaluations. However, despite its decentralized nature, concerns persist regarding confidentiality and secure data sharing, both crucial for fostering future collaborations. To address these challenges, this study advocates an access control mechanism (ACM) to safeguard confidentiality: only the assigned reviewer has access to the confidential manuscript, ensuring the integrity of the review process. In scientific collaborations, the need for confidential data sharing extends beyond reproducibility to vital collaborative endeavors such as publications, Memorandums of Understanding (MoUs), grants, and funding. Because a hierarchical ACM may prove insufficient for protecting confidential data, a more nuanced approach is proposed: a fine-grained access control model that considers contextual opinions, embodied in the concept of Zero Trust Architecture. Additionally, an incentivization mechanism based on author feedback is proposed to bolster reviewer engagement and credibility. In summary, this study tackles trust and confidentiality concerns within the review system, facilitating secure data sharing for future collaborations while enhancing the credibility of reviewers. By advocating a transition toward decentralized scientific collaboration and review processes, this work underscores the importance of integrating confidential review and data-sharing practices, thereby fortifying the scholarly ecosystem.
The implementation of the Know Your Customer (KYC) strategy by banks within the financial sector enhances the operational efficiency of such establishments. The data gathered from the client during the KYC procedure may be applied to deter possible fraudulent activities, money laundering, and other criminal undertakings. Most financial institutions implement their own KYC procedures. Alternatively, a centralized system permits collaboration and operation execution by multiple financial institutions. Beyond these two scenarios, KYC processes can also be executed via a blockchain-based system. The blockchain’s decentralized network is highly transparent, facilitating the validation and verification of customer data in real time for all relevant stakeholders. In addition, the immutability and cryptographic guarantees of the blockchain ensure that client information remains secure and tamper-proof, thereby mitigating the risk of data breaches. Blockchain-based KYC can further improve the client experience by eliminating the need for redundant paperwork and document submissions. This study proposes a blockchain-based KYC system that collects limit, risk, and collateral information from customers once banks grant them transaction approval. The approach, built upon Ethereum, grants financial institutions the ability to read and write financial data on the blockchain network. This KYC method establishes a transparent, dynamic, and expeditious framework among financial institutions. In addition, solutions are discussed for the Sybil attack, one of the most severe threats in such networks.
Despite the maturity of cloud services (e.g., outsourcing of computational tasks), a number of operational challenges remain. For example, how do we ensure trust between outsourcers and workers in a zero-trust environment? While a number of blockchain-based solutions that eliminate the reliance on trusted third parties have been presented, many of these existing approaches do not achieve robust fairness and/or support compatibility with other systems. In this paper, we propose an efficient fair payment system using blockchain (EFPB), designed to achieve robust fairness and compatibility. Specifically, EFPB comprises a number of cryptographic building blocks, mainly: a one-way accumulator (RSA-based construction), stealth addresses, and symmetric encryption. We then evaluate the performance of EFPB, demonstrating that it is more efficient and lower-cost than competing schemes, and present a comparative summary of functionalities.
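The RSA-based one-way accumulator that EFPB uses as a building block can be sketched in a few lines: elements (represented as primes) are folded into the accumulator as exponents, and a membership witness is the accumulator computed over all other elements. The parameters below are toy-sized and the API is our own; a real deployment needs an RSA modulus whose factorization nobody knows.

```python
# Minimal RSA one-way accumulator sketch. Elements must be primes;
# modulus and generator here are insecure toy values for illustration.
def accumulate(g, elems, N):
    acc = g
    for e in elems:
        acc = pow(acc, e, N)       # acc = g^(product of elems) mod N
    return acc

def witness(g, elems, target, N):
    # witness = accumulator over every element except the target
    return accumulate(g, [e for e in elems if e != target], N)

def verify(acc, wit, target, N):
    return pow(wit, target, N) == acc

g = 65537
N = 3233 * 9973                    # toy modulus; factorization must be secret in practice
elems = [3, 5, 7, 11]              # accumulated elements, encoded as primes
acc = accumulate(g, elems, N)
w = witness(g, elems, 7, N)
assert verify(acc, w, 7, N)        # 7 is in the set
assert not verify(acc, w, 13, N)   # 13 is not
```

The appeal for fair payment is that the accumulator value is constant-size regardless of how many items it commits to, and a single witness proves membership without revealing the rest of the set.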
IoT-based remote health monitoring is a promising technology to support patients who are unable to travel to medical facilities. Due to the sensitivity of health data, it is important to secure it against all possible threats. While a great deal of work has been done to secure IoT device-cloud communication and health records on the cloud, insider attacks remain a significant challenge. Malicious insiders may tamper with, steal, or alter patients’ health data, resulting in a loss of patient trust in these systems. Audit logs in the cloud, which may point to illegal data access, may also be erased or forged by malicious insiders, as they tend to have technical knowledge and privileged access to the system. Thus, in this work, we propose a Cloud Access Security Broker (CASB) model that (a) logs every action performed on user data and (b) secures those logs by placing them in a private blockchain viewable by the data owners (i.e., patients). Patients can query the blockchain, track their data’s movement, and be alerted if their data has been accessed by an administrator or moved outside the cloud storage. We implement a web application that receives health data from patients and a CASB that securely stores the records in the cloud, and we integrate a private blockchain that immediately logs all actions occurring in the backend of the web application and the CASB. We evaluate the system’s security and performance under varying numbers of patients and actions.
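The tamper-evidence the blockchain provides for audit logs can be conveyed with a hash-chain sketch: each log entry commits to the hash of the previous one, so a retroactive edit by an insider breaks every later link. This is a simplification of the private blockchain in the paper; the field names are illustrative.

```python
# Hash-chained audit log: any modification of an earlier entry
# invalidates the hashes of all entries that follow it.
import hashlib
import json

def add_entry(chain, action):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"action": action, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain):
    prev = "0" * 64
    for entry in chain:
        body = {"action": entry["action"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
add_entry(log, "patient42: record read by admin")
add_entry(log, "patient42: record exported")
assert verify_chain(log)
log[0]["action"] = "patient42: nothing happened"   # insider tampering
assert not verify_chain(log)
```

A private blockchain adds consensus and replication on top of this chaining, so no single insider can rewrite the chain and recompute all subsequent hashes unnoticed.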
Building smart services for smart cities has become a significant focus of the Internet of Things (IoT). IoT devices are able to sense their surroundings and react appropriately. Smart city applications emphasize the necessity of safe data sharing across heterogeneous devices, yet certain actions taken during sharing may aim to compromise security, privacy, and integrity, and the centralized repositories currently in place have made the majority of attacks possible. Authentication and the sharing of sensitive data are therefore essential to guaranteeing the security of IoT applications. Blockchain and IoT are two widely used technologies, with IoT focusing on data collection via various devices and blockchain enabling data integrity. This paper introduces a novel blockchain-based framework to ensure the security and integrity of IoT data. The proposed SecPrivPreserve framework provides security through several phases, including initialization, registration, data protection, authentication, data access control, validation, and data sharing and download. Diverse security mechanisms such as one-time passwords (OTPs), encryption, and hashing are deployed across these phases to strengthen confidentiality, privacy, and integrity. Since the SecPrivPreserve framework is simulated on a permissioned blockchain platform, the properties of tamper-resistance and non-repudiation follow automatically. Moreover, data protection uses Chebyshev polynomials and interpolation. The presented framework is evaluated with the Fabric SDK, and its results are compared with baseline state-of-the-art frameworks: the analysis reveals that the proposed SecPrivPreserve approach achieves a 34 s improvement in responsiveness, a computational time of 94 s, an encryption quality of 0.87, and a detection rate of 0.82.
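The reason Chebyshev polynomials appear in data-protection schemes is their semigroup property, T_r(T_s(x)) = T_rs(x), which enables a Diffie-Hellman-style key agreement. A minimal sketch over the reals (deployed schemes work over finite fields for security; the parameter values here are arbitrary):

```python
# Chebyshev polynomials of the first kind via the recurrence
# T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x), with T_0 = 1, T_1 = x.
def chebyshev(n, x):
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0 if n == 0 else t1

# Semigroup property: composing T_3 and T_4 equals T_12, in either order,
# so two parties with secret degrees 3 and 4 derive the same shared value.
x = 0.3                                    # public parameter
shared_a = chebyshev(3, chebyshev(4, x))   # party A applies its secret degree
shared_b = chebyshev(4, chebyshev(3, x))   # party B applies its secret degree
assert abs(shared_a - shared_b) < 1e-9
assert abs(shared_a - chebyshev(12, x)) < 1e-9
```

Interpolation (the other primitive SecPrivPreserve names) typically complements this by splitting protected data into shares that only a sufficient set of points can reconstruct; that part is not shown here.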
At present, the rapid development of big data technology and the Internet of Things has led to the generation of spatiotemporal data streams on a substantial scale. In this context, data holders often opt for cloud storage of their data to mitigate the pressure of local storage and computing overhead. However, this centralized storage mode renders the data susceptible to risks of tampering and leakage due to the absence of physical control over spatiotemporal big data. Therefore, this paper devises a blockchain-based spatiotemporal big data privacy protection scheme. Initially, to ensure data privacy, the scheme adopts a storage mode in which the public and side chains are coordinated. Subsequently, the irreversible SHA256 algorithm is utilized to authenticate the access nodes. Finally, AES and RSA hybrid encryption is applied to encrypt the collected spatiotemporal big data, further enhancing the security of private data. The security performance of the system was evaluated by comparing the success rate of attacks on the blockchain under various circumstances. Additionally, the system throughput of this scheme was analyzed through simulation, revealing that the service life of the system extends to a minimum of 13.79 years when 100 nodes are used. Under this configuration, the system throughput reaches 1607 transactions/s, each node hosts 1.2107 blocks, and querying specific spatiotemporal data takes less than 2 seconds.
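The SHA256-based node authentication step can be sketched as a challenge-response exchange: a node proves knowledge of its registered credential by hashing it together with a fresh nonce, so the credential itself never crosses the network. The protocol shape and names below are our illustration, not the paper's exact construction.

```python
# Challenge-response node authentication built on SHA-256.
# The verifier stores only a hash of each node's secret.
import hashlib
import secrets

registered = {"node-7": hashlib.sha256(b"node-7-secret").hexdigest()}

def challenge():
    return secrets.token_hex(16)               # fresh nonce per attempt

def respond(secret, nonce):
    cred = hashlib.sha256(secret).hexdigest()
    return hashlib.sha256((cred + nonce).encode()).hexdigest()

def authenticate(node_id, nonce, response):
    expected = hashlib.sha256((registered[node_id] + nonce).encode()).hexdigest()
    return secrets.compare_digest(expected, response)

nonce = challenge()
assert authenticate("node-7", nonce, respond(b"node-7-secret", nonce))
assert not authenticate("node-7", nonce, respond(b"wrong-secret", nonce))
```

The fresh nonce defeats replay of an old response, and `compare_digest` avoids timing side channels in the comparison; hybrid AES/RSA encryption of the data itself would follow once a node is admitted.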
In today’s interconnected world, identity authentication and key agreement are essential steps in securing communication among IoT terminal devices. In the edge computing environment, with frequent cross-domain authentication and data sharing among IoT devices in different security domains, identity authentication faces a series of challenges and security issues. Most traditional identity authentication methods are based on public key infrastructure, which is prone to single points of failure and ill-suited to the distributed architecture of edge computing. In this article, we apply blockchain technology to the identity authentication and key agreement process of IoT terminal devices. To serve cross-domain requests from terminal devices in different security domains, a multi-layer blockchain authentication architecture is designed. The hash value of each digital certificate is stored on the blockchain and combined with dynamic accumulator technology to enhance the reliability and efficiency of certificate authentication. Security analysis and experimental results demonstrate that our scheme achieves efficient and secure authentication and key agreement.
Online cloud data storage is a rapidly growing pillar of the IT industry that offers data owners an array of attractive, highly sought-after scalable storage services. Cloud users can easily access these services and have the flexibility to manage and process their data effectively without worrying about the deployment or maintenance of personal storage devices. As a result, the number of cloud users purchasing these convenient and cost-effective services has grown, and Cloud Service Providers (CSPs) are multiplying to meet the demand. However, there is one major security issue with outsourced data on shared cloud storage: its privacy and accuracy cannot be guaranteed, as it may be vulnerable to unauthorized access by malicious insiders or outside hackers. To address these issues, we propose a partial signature-based data auditing system that strengthens both privacy and accuracy while significantly reducing the computational cost of the auditing process. The system uses cryptographic techniques such as homomorphic encryption and hash functions, enabling secure sharing between multiple parties while performing integrity checks on stored files at regular intervals to detect tampering attempts by external attackers or malicious insiders. A further benefit of the scheme is its support for dynamic operations on outsourced data. According to the security analysis, this work achieves the desired security properties, and simulation results of dynamic operations on varying numbers of data blocks and sub-blocks demonstrate its effectiveness for real-world applications.
As one of the essential steps in securing government data sharing, Identity Authentication (IA) plays a vital role in large-scale data processing. However, centralized IA schemes based on a trusted third party suffer from information leakage, single points of failure, and key escrow. Therefore, an effective IA model based on multiple attribute centers is designed herein. First, a private key for each attribute of a data requester is generated by the attribute authorization center; after obtaining the attribute private keys, the data requester generates a personal private key. Second, a dynamic key generation algorithm is proposed that combines blockchain and smart contracts to periodically update the key of a data requester, preventing theft by external attackers, ensuring the traceability of IA, and reducing the risk of privacy leakage. Third, the combination of blockchain and the InterPlanetary File System is used to store the attribute field information of the data requester, further reducing the cost of on-chain storage and improving storage effectiveness. Experimental results show that the proposed model ensures the privacy and security of identity information and outperforms similar authentication models in terms of computational and communication costs.
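The periodic key-update idea can be sketched with a one-way derivation chain: each epoch's key is computed from the previous one with a hash, so a key stolen in the current epoch does not reveal earlier epochs. The on-chain smart-contract trigger is replaced here by a simple epoch counter, and all names are illustrative.

```python
# Forward-secure key rotation sketch: hashing is one-way, so earlier
# epoch keys cannot be recovered from the current key.
import hashlib

def next_key(key: bytes, epoch: int) -> bytes:
    return hashlib.sha256(key + epoch.to_bytes(4, "big")).digest()

key = hashlib.sha256(b"requester-initial-secret").digest()
history = []
for epoch in range(3):                 # one update per period
    key = next_key(key, epoch)
    history.append(key)

assert len(set(history)) == 3          # every epoch yields a fresh key
assert all(len(k) == 32 for k in history)
```

In the paper's setting the update schedule is enforced by the smart contract rather than a local counter, and traceability comes from recording each update event on the blockchain.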