
Data sharing in cloud storage is receiving substantial attention in Information and Communications Technology, since it can provide users with efficient and effective storage services. To protect the confidentiality of the shared sensitive data, cryptographic techniques are usually applied. However, data protection still poses significant challenges in cloud storage for data sharing. Among them, how to protect and revoke the cryptographic key is the fundamental challenge. To tackle this, we propose a new data protection mechanism for cloud storage, which holds the following properties. 1) The cryptographic key is protected by two factors; as long as one of the two factors remains secret, the secrecy of the cryptographic key is preserved. 2) The cryptographic key can be revoked efficiently by integrating the proxy re-encryption and key separation techniques. 3) The data is protected in a fine-grained way by adopting the attribute-based encryption technique. Furthermore, the security analysis and performance evaluation show that our proposal is secure and efficient, respectively.

Keywords- cloud computing, privacy, security, attribute-based key, encryption, decryption


 

I. INTRODUCTION

Driven by their attractive on-demand features and advantages, the development and deployment of cloud-based applications have gained tremendous impetus in industry and the research community in recent years. Cloud storage is one of the most successful cloud-based applications, since it matches the huge data sharing demand quite well. Sharing huge data with several data sharers is a costly task, and the cost on the data owner side is usually proportional to the number of data sharers; with the help of cloud storage, this cost can be reduced to that of uploading the shared data once. The only thing the data owner needs to do is upload the data to the cloud and grant the access right to the data sharers. After that, data sharers can obtain the data from the cloud instead of from the data owner. Despite the benefits of data sharing in cloud storage, it also gives the adversary many chances to access the shared data without authorization. To protect the confidentiality of the shared data, cryptographic schemes are usually applied. The security of cryptographic schemes stems from the security of the underlying cryptographic key. Currently, the cryptographic key is simply stored in the computer in most existing cryptographic schemes, yet it has been reported that stored keys can be revealed by some viruses. To deal with the key exposure problem, many techniques have been proposed, such as the key-insulated public key technique and the parallel key-insulated public key technique.

To the best of our knowledge, the cryptographic key exposure and revocation problems in cloud storage remained unexplored until the work by Liu et al. (named LLS+15 afterwards). In LLS+15, a novel two-factor data protection mechanism is proposed. The cryptographic key is divided into two parts: one is kept in the user's computer and the other is stored in a security device (e.g., a smart card), similar to e-banking. As long as one of these two parts is kept secret from the adversary, the confidentiality of the cryptographic key is preserved; hence the name "two-factor". Furthermore, once the user's security device is lost or stolen, it can be revoked by using the proxy re-encryption technique. However, LLS+15 aims to solve the security problem of data storage, not the data sharing scenario, in cloud computing. In particular, a ciphertext in LLS+15 is essentially an identity-based ciphertext that can be decrypted by only one user, not by a group of users as in the data sharing scenario. Recently, data sharing has been raising heated concern, while privacy remains the key concern and an equally striking challenge that hinders the growth of data sharing in cloud storage.
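The two-factor split in LLS+15 can be made concrete with a minimal sketch. Note the hedging: LLS+15 itself builds on identity-based encryption and proxy re-encryption, not plain XOR splitting; the XOR-based secret split below, with hypothetical function names, only illustrates why a single leaked factor reveals nothing.

```python
import os

def split_key(master_key: bytes):
    # One share is stored on the security device, the other on the computer.
    # Each share alone is a uniformly random string and leaks nothing.
    device_share = os.urandom(len(master_key))
    computer_share = bytes(a ^ b for a, b in zip(master_key, device_share))
    return device_share, computer_share

def recover_key(device_share: bytes, computer_share: bytes) -> bytes:
    # Both factors are needed; XORing the two shares restores the key.
    return bytes(a ^ b for a, b in zip(device_share, computer_share))
```

Because the device share is chosen uniformly at random, an adversary holding either share in isolation sees a one-time-pad-encrypted value, which matches the "only if one of the two factors leaks" guarantee described above.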

 

II. CLOUD SECURITY TECHNIQUES

2.1
Delay-Optimized File Retrieval under LT-Based Cloud
Storage

 

The Luby Transform (LT) code is one of the popular fountain codes for storage systems due to its efficient recovery. This paper shows that multi-stage retrieval of fragments is effective in reducing the file-retrieval delay. It first develops a delay model for the various multi-stage retrieval schemes applicable to the considered system. The paper focuses on the file-retrieval delay [1], defined as the duration between the time the portal receives an LT-coded file request and the time the last LT-coded packet is sent out by the portal. The file-retrieval delay is a good indicator of user experience. A delay-optimal file-retrieval problem is formulated, which aims to minimize the retrieval delay by strategically scheduling the LT-coded packet requests. In the proposed multi-stage request scheme, the design objective is to minimize the average file-retrieval delay for a given number of stages.

This work addresses the problem of delay-optimal file retrieval under a distributed cloud storage system. Using the delay model, an optimal two-stage request scheme is derived for a given decoding probability. Both simulation and numerical results confirm that this optimal scheme can reduce the average delay dramatically. The analysis offers storage system operators a way to design an optimized retrieval scheme for LT-based distributed cloud storage systems.
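To make the coding setting concrete, here is a toy LT encoder and the standard peeling decoder. This is a deliberately simplified sketch: it uses a uniform degree distribution over byte-sized fragments, whereas real LT codes use the robust soliton distribution over large packets, and the function names are our own.

```python
import random

def lt_encode(fragments, num_packets, seed=0):
    """Each packet XORs a random subset (its 'degree') of source fragments."""
    rng = random.Random(seed)
    packets = []
    for _ in range(num_packets):
        degree = rng.randint(1, len(fragments))
        idxs = frozenset(rng.sample(range(len(fragments)), degree))
        value = 0
        for i in idxs:
            value ^= fragments[i]
        packets.append((idxs, value))
    return packets

def lt_decode(packets, num_fragments):
    """Peeling decoder: repeatedly resolve packets with one unknown fragment."""
    packets = [(set(idxs), val) for idxs, val in packets]
    recovered = {}
    progress = True
    while progress and len(recovered) < num_fragments:
        progress = False
        for idxs, val in packets:
            unknown = idxs - recovered.keys()
            if len(unknown) == 1:          # exactly one fragment still unknown
                i = unknown.pop()
                v = val
                for j in idxs - {i}:       # subtract known fragments
                    v ^= recovered[j]
                recovered[i] = v
                progress = True
    return [recovered.get(i) for i in range(num_fragments)]
```

The retrieval-scheduling problem above asks how many such packets to request, and in how many stages, so that decoding succeeds with the target probability at minimum delay.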

 

2.2 QuickSync: Improving Synchronization Efficiency for Mobile Cloud Storage Services

 

Mobile cloud storage services have gained great success in recent years. This paper identifies, analyses, and addresses the synchronization inefficiency problem of modern mobile cloud storage services. The results demonstrate that existing commercial sync services fail to make full use of the available bandwidth and generate a large amount of unnecessary sync traffic in certain circumstances, even though incremental sync is implemented. Based on these findings, QuickSync [2], a system with three novel techniques, is proposed to improve the sync efficiency of mobile cloud storage services, and the system is built on two commercial sync services. The capability of Batched Sync in improving bandwidth utilization efficiency is further evaluated, along with the overall improvement in sync efficiency under real-world workloads, by comparing the performance of the original Seafile and Dropbox clients with that of the two service frameworks improved with QuickSync. Extensive evaluations demonstrate that QuickSync can effectively save sync time and reduce significant traffic overhead for representative sync workloads.
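The incremental-sync idea the measurements probe can be sketched with fixed-size block hashing: only blocks whose hash changed are uploaded. This is a simplification with illustrative names and a toy block size; QuickSync's actual techniques (such as network-aware chunking and redundancy elimination) are more involved.

```python
import hashlib

BLOCK_SIZE = 4  # deliberately tiny for illustration; real services use KB-MB blocks

def block_hashes(data: bytes):
    # Hash each fixed-size block of the file.
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def blocks_to_upload(old: bytes, new: bytes):
    # Only blocks whose hash changed, or that are entirely new, need syncing.
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h) if i >= len(old_h) or h != old_h[i]]
```

A one-byte edit inside a block re-uploads only that block rather than the whole file, which is the bandwidth saving incremental sync is meant to deliver; fixed-size blocking, however, re-uploads everything after an insertion, which is one source of the unnecessary traffic the paper observes.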

 

2.3
Dynamic-Hash-Table Based Public Auditing for Secure
Cloud Storage

The dynamic hash table [3] is a new two-dimensional data structure, located at a third-party auditor (TPA), that records data property information for dynamic auditing. Differing from existing works, the proposed scheme migrates the authorized information from the CSP to the TPA, and thereby significantly reduces the computational cost and communication overhead. It supports privacy preservation by combining the homomorphic authenticator based on the public key with random masking generated by the TPA, and achieves batch auditing by employing the aggregate BLS signature technique. The proposed scheme can effectively achieve secure auditing for cloud storage, and outperforms previous schemes in computation complexity, storage cost and communication overhead.

In addition, for privacy preservation, it introduces random masking provided by the TPA into the proof-generation process to blind the data information. It further exploits the aggregate BLS signature technique from bilinear maps to perform multiple auditing tasks simultaneously; the principle is to aggregate all the signatures by different users on various data blocks into a single short one and verify it only once, reducing the communication cost of the verification process. Thus, it may be a new trend to design a more effective scheme, including different audit strategies for various types of cloud data.
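The role of the dynamic hash table can be illustrated with a small data-structure sketch: a table of files, each holding an ordered list of block records with version and timestamp, so block insertions, deletions, and modifications are cheap local updates at the TPA. The field choices follow the paper's description; the class and method names are our own.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class BlockRecord:
    version: int      # bumped on every modification
    timestamp: int    # time of the latest update

@dataclass
class FileRecord:
    file_id: str
    blocks: List[BlockRecord] = field(default_factory=list)

class DynamicHashTable:
    """Two-dimensional structure kept at the TPA: files -> block records."""
    def __init__(self):
        self.files: Dict[str, FileRecord] = {}

    def add_file(self, file_id, num_blocks, ts):
        self.files[file_id] = FileRecord(
            file_id, [BlockRecord(1, ts) for _ in range(num_blocks)])

    def insert_block(self, file_id, index, ts):
        self.files[file_id].blocks.insert(index, BlockRecord(1, ts))

    def modify_block(self, file_id, index, ts):
        rec = self.files[file_id].blocks[index]
        rec.version += 1
        rec.timestamp = ts

    def delete_block(self, file_id, index):
        del self.files[file_id].blocks[index]
```

Because only these lightweight records, not the data itself, live at the auditor, a dynamic operation updates one list entry instead of recomputing an authenticated structure at the CSP.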

 

 

2.4 KSF-OABE: Outsourced Attribute-Based Encryption with Keyword Search Function for Cloud Storage

Attribute-based encryption (ABE) [4] has been used to design fine-grained access control systems, providing one good method to solve security issues in the cloud setting. Outsourced ABE with fine-grained access control can largely reduce the computation cost for users who want to access encrypted data stored in the cloud, by outsourcing the heavy computation to the cloud service provider. However, as the number of encrypted files stored in the cloud becomes very large, efficient query processing is hindered. To deal with this problem, a new cryptographic primitive is proposed: an attribute-based encryption scheme with outsourced key-issuing and outsourced decryption that can implement a keyword search function (KSF-OABE). The time-consuming pairing operations can be outsourced to the cloud service provider, while the lightweight operations are done by users; thus, the computation cost at both the user and trusted authority sides is minimized. The proposed scheme supports keyword search, which can greatly improve communication efficiency and further protect the security and privacy of users.
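The keyword-search function can be illustrated with a symmetric searchable-index sketch. This is illustrative only: KSF-OABE itself is a pairing-based, attribute-based construction, whereas the HMAC-token index below, with hypothetical names, merely shows how a server can match encrypted keywords without learning them.

```python
import hashlib
import hmac

def trapdoor(key: bytes, keyword: str) -> str:
    # Deterministic keyword token; the server compares tokens, never words.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, docs):
    # docs maps doc_id -> list of keywords; the index maps token -> doc ids.
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(key, w), set()).add(doc_id)
    return index

def search(index, key: bytes, keyword: str):
    # A user holding the key derives the trapdoor; the server just looks it up.
    return index.get(trapdoor(key, keyword), set())
```

The cloud stores only the token-to-document index, so it can answer queries while learning nothing beyond which (opaque) tokens repeat; access control over the matching ciphertexts is then enforced by the ABE layer.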

 

2.5
Minimum-Cost Cloud Storage Service Across Multiple Cloud Providers

Many cloud service providers offer data storage services with data centers distributed worldwide. These data centers [5] provide different Get/Put latencies and unit prices for resource utilization and reservation. Three enhancement methods are proposed to reduce the payment cost and service latency: 1) coefficient-based data reallocation; 2) multicast-based data transferring; and 3) request redirection-based congestion control. According to the operations of a customer's clients, the customer data center generates read/write requests to a storage datacenter storing the requested data.

For a customer, DAR aims to find a schedule that allocates each data item to a number of selected datacenters, allocates request serving ratios to these datacenters, and determines reservation, in order to guarantee the SLO and minimize the payment cost of the customer.

This work aims to minimize the payment cost of customers while guaranteeing their SLOs by using worldwide distributed data centers belonging to different CSPs with different resource unit prices. The cost-minimization problem is first modeled using integer programming. Due to its NP-hardness, the DAR system is introduced as a heuristic solution, which includes a dominant-cost-based data allocation algorithm among storage data centers and an optimal resource reservation algorithm to reduce the cost of each storage data center.
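The flavor of the allocation step can be sketched as a greedy per-item choice: among datacenters whose latency meets the SLO, pick the cheapest. This is a drastic simplification of DAR, which additionally splits request serving ratios across datacenters and reserves resources; the dictionary fields and function name are hypothetical.

```python
def pick_datacenter(datacenters, slo_latency_ms):
    # Keep only datacenters whose Get latency satisfies the SLO,
    # then choose the one with the lowest per-request price.
    feasible = [dc for dc in datacenters if dc["get_latency_ms"] <= slo_latency_ms]
    if not feasible:
        raise ValueError("no datacenter satisfies the SLO")
    return min(feasible, key=lambda dc: dc["price_per_get"])
```

Tightening the SLO shrinks the feasible set and can force a more expensive choice, which is exactly the cost-versus-latency tension the integer program captures globally.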

2.6
An Economical and SLO-Guaranteed Cloud Storage Service
Across Multiple Cloud Service Providers

ES3, a multi-cloud Economical and SLO-guaranteed Storage Service [6], determines data allocation and resource reservation schedules with payment cost minimization and SLO guarantee. ES3 incorporates a coordinated data allocation and resource reservation method, which allocates each data item to a datacenter and determines the resource reservation amount on datacenters by leveraging all the pricing policies, and a genetic-algorithm-based data allocation adjustment method, which reduces the Get/Put rate variance in each datacenter to maximize the reservation benefit. The problem of finding the optimal data allocation and resource reservation schedules for cost minimization and SLO guarantee is formulated as an integer program with a payment-minimization objective. The authors propose this multi-cloud economical and SLO-guaranteed storage service for a cloud broker operating over multiple CSPs; it provides SLO guarantee and cost minimization even under Get rate variation. ES3 is more advantageous than previous methods in that it fully utilizes different pricing policies and considers request rate variance in minimizing the payment cost. ES3 has a data allocation and reservation method and a GA-based data allocation adjustment method to guarantee the SLO and minimize the payment cost.

 

2.7 ASSER: An Efficient, Reliable, and
Cost-Effective Storage Scheme for Object-Based Cloud Storage Systems

 

ASSER [7] is an ASSembling chain of Erasure coding and Replication. ASSER stores each object in two parts: a full copy and a certain number of erasure-coded segments. Dedicated read/write protocols are established for ASSER, leveraging its unique structural advantages. On the basis of the elementary protocols, sequential and PRAM consistency are implemented to make ASSER feasible for various services with different performance/consistency requirements. Evaluation results demonstrate that, under the same fault tolerance and consistency level, ASSER outperforms N-way replication and pure erasure coding in I/O throughput under diverse system and workload configurations, with superior performance stability. ASSER delivers stably efficient I/O performance at a much lower storage cost than the other comparatives. MPL is an extension of the prevalently adopted parity logging technique, and the benefits it brings are twofold. First, parity logging facilitates efficient handling of update requests: the segment chain in ASSER takes charge of receiving and handling update requests, thus reducing the amount of disk space that needs to be overwritten. Second, introducing multiple versions into traditional parity logging enables ASSER to naturally support multiple consistency levels: each object can have more than one recoverable version in ASSER, and whether a version is valid to return is determined by the consistency level with which ASSER is configured. In summary, ASSER is a hybrid storage scheme that aims at a balanced trade-off between I/O performance and space efficiency at low storage cost. The authors proposed multiversional parity logging to facilitate efficient read/write handling in ASSER, and evaluated the performance of ASSER and the robustness of its implementation. According to their experimental results, with only half the extra space overhead, ASSER outperformed CRAQ in write-heavier workloads and stayed evenly matched in read-heavier workloads. Finally, they verified the feasibility of ASSER in a practical environment through experiments driven by real-world traces.
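The hybrid layout (a full copy plus erasure-coded segments) can be illustrated with single-parity XOR coding: split the object into k segments and add one parity segment, so any single lost segment is recoverable. This is a sketch; ASSER's actual read/write protocols, consistency handling, and MPL logging are far richer, and the function names are our own.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_object(obj: bytes, k: int = 3):
    # Split the object into k equal segments (zero-padded) and add one
    # XOR parity segment; ASSER additionally keeps a full replica.
    seg_len = -(-len(obj) // k)  # ceiling division
    segments = [obj[i * seg_len:(i + 1) * seg_len].ljust(seg_len, b"\0")
                for i in range(k)]
    parity = reduce(xor_bytes, segments)
    return segments, parity

def recover_segment(segments, parity, lost: int):
    # XOR of the surviving segments and the parity restores the lost one.
    survivors = [s for i, s in enumerate(segments) if i != lost]
    return reduce(xor_bytes, survivors + [parity])
```

Reads can be served from the full replica at replication speed, while the segment chain provides fault tolerance at a fraction of the space an extra replica would cost, which is the trade-off ASSER exploits.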

 

2.8
Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage

This work shows how to securely, efficiently, and flexibly share data with others in cloud storage. New public-key cryptosystems [8] are described that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated.

This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. Formal security analysis of the schemes is provided in the standard model, and other applications are described. In particular, the schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was previously unknown.

The design is based on the collusion-resistant broadcast encryption scheme proposed by Boneh et al. Although their scheme supports constant-size secret keys, each key only has the power to decrypt ciphertexts associated with a particular index. New Extract and corresponding Decrypt algorithms therefore need to be devised.

 

2.9
Anonymous and Traceable Group Data
Sharing in Cloud Computing

With cloud computing, how to achieve secure and efficient data sharing in cloud environments is an issue to be solved. In addition, how to achieve both anonymity and traceability is also a challenge in the cloud for data sharing. This paper focuses on enabling data sharing and storage for the same group in the cloud with high security and efficiency in an anonymous manner [9]. By leveraging key agreement and the group signature, a novel traceable group data sharing scheme is proposed to support anonymous multiple users in public clouds. On the one hand, group members can communicate anonymously with respect to the group signature, and the real identities of members can be traced if necessary. On the other hand, a common conference key is derived based on the key agreement to enable group members to share and store their data securely.

The architecture of the cloud computing scheme is a system model containing three entities: the cloud, the group manager, and the group members. The cloud provides users with seemingly unlimited storage services. In addition to providing efficient and convenient storage services for users, the cloud can also provide data sharing services. The cloud will not deliberately delete or modify the uploaded data of users, but it will be curious about the contents of the stored data and the users' identities; the cloud is therefore a semi-trusted party in the scheme.

The paper presents a secure and fault-tolerant key agreement for group data sharing in a cloud storage scheme. Based on the SBIBD and group signature techniques, the proposed approach can generate a common conference key efficiently, which can be used to protect the security of the outsourced data and support secure group data sharing in the cloud.
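The notion of a common conference key can be illustrated minimally: every member broadcasts a random share, and all members hash the collected shares the same way, so everyone derives the same key. This is emphatically not the SBIBD-based protocol of the paper, which arranges contributions along the block-design structure and never exposes raw shares to eavesdroppers; the sketch only shows the "all members compute one key" property.

```python
import hashlib
import os

def make_contribution() -> bytes:
    # Each group member broadcasts a fresh random share.
    return os.urandom(16)

def conference_key(contributions) -> bytes:
    # Every member sorts the collected shares and hashes them, so all
    # members derive the same 256-bit group key regardless of arrival order.
    h = hashlib.sha256()
    for share in sorted(contributions):
        h.update(share)
    return h.digest()
```

Once derived, such a group key can encrypt the outsourced data so that any member, but not the semi-trusted cloud, can decrypt it.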

 

2.10 Block Design-based Key Agreement for Group Data Sharing in Cloud Computing

Data sharing in cloud computing enables multiple participants to freely share group data. By taking advantage of the symmetric balanced incomplete block design (SBIBD) [10], a novel block-design-based key agreement protocol is presented that supports multiple participants and can flexibly extend the number of participants in a cloud environment according to the structure of the block design. Based on the proposed group data sharing model, general formulas are presented for generating the common conference key K for multiple participants.

To support a group data sharing scheme for multiple participants applying an SBIBD, an algorithm is designed to construct the (v, k+1, 1)-design. Moreover, the constructed (v, k+1, 1)-design requires some transformations to establish the group data sharing model such that v participants can perform the key agreement protocol.

With the help of the conference key agreement protocol, the security and efficiency of group data sharing in cloud computing are greatly improved. Due to the definition and the mathematical description of the structure of a (v, k+1, 1)-design, multiple participants can be involved in the protocol, and general formulas of the common conference key for each participant are derived.
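A concrete (v, k+1, 1)-design is easy to exhibit for small parameters: developing the perfect difference set {0, 1, 3} modulo 7 yields the (7, 3, 1)-design (the Fano plane, i.e. k = 2), in which every pair of participants occurs together in exactly one block. The code below is an illustrative construction and check, not the paper's algorithm.

```python
from itertools import combinations

def develop_difference_set(v: int, diff_set):
    # Shifting a perfect difference set through Z_v gives the v blocks
    # of a (v, k, 1)-design with block size k = len(diff_set).
    return [sorted((d + shift) % v for d in diff_set) for shift in range(v)]

def pair_occurrences(v: int, blocks):
    # Count how many blocks contain each unordered pair of points;
    # lambda = 1 means every count is exactly one.
    counts = {pair: 0 for pair in combinations(range(v), 2)}
    for block in blocks:
        for pair in combinations(block, 2):
            counts[pair] += 1
    return counts
```

The "every pair meets in exactly one block" property is what lets the protocol route each participant's contribution to every other participant in a bounded number of rounds.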

 

2.11
Data Security for Cloud Environment with
Semi-Trusted Third Party

Data security for cloud environment with semi-trusted third party (DaSCE) [11] is a data security system that provides key management, access control, and file assured deletion. DaSCE utilizes Shamir's (k, n) threshold scheme to manage the keys, where k out of n shares are required to generate the key. Multiple key managers are used, each hosting one share of the key; this avoids a single point of failure for the cryptographic keys.

DaSCE makes use of both symmetric and asymmetric keys. The confidentiality and integrity services for data are provided through symmetric keys, which are secured by using asymmetric keys. The asymmetric key pairs are generated by the third-party key managers.

The authors modeled and analyzed FADE, and the analysis highlighted some issues in FADE's key management; DaSCE improves the key management and authentication processes. The working of the DaSCE protocol was formally analyzed using HLPN, SMT-Lib, and the Z3 solver, and its performance was evaluated based on the time consumed during file upload and download. The results revealed that the DaSCE protocol can be practically used in clouds for the security of outsourced data. The fact that DaSCE does not require any protocol or implementation-level changes at the cloud makes it a highly practical methodology for the cloud.
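Shamir's (k, n) threshold scheme, on which DaSCE's key management rests, can be sketched over a prime field: the secret is the constant term of a random degree-(k-1) polynomial, each key manager holds one evaluation point, and any k shares reconstruct the secret by Lagrange interpolation. This is a minimal sketch for small integer secrets; production code would use a vetted library and constant-time arithmetic.

```python
import secrets

PRIME = 2**61 - 1  # Mersenne prime field modulus; the secret must be below it

def split_secret(secret: int, k: int, n: int):
    # Random degree-(k-1) polynomial with constant term = secret,
    # evaluated at x = 1..n; any k points determine the polynomial.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares) -> int:
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With k = 3 and n = 5 key managers, any three managers can recover the key while any two learn nothing, which is why losing, or compromising, fewer than k managers does not endanger the stored keys.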
