TY - JOUR
T1 - (k, ε, δ)-Anonymization: Privacy-Preserving Data Release Based on k-Anonymity and Differential Privacy
AU - Tsou, Yao-Tung
AU - Alraja, Mansour Naser
AU - Chen, Li-Sheng
AU - Chang, Yu-Hsiang
AU - Hu, Yung-Li
AU - Huang, Yennun
AU - Yu, Chia-Mu
AU - Tsai, Pei-Yuan
PY - 2021/9
Y1 - 2021/9
N2 - The General Data Protection Regulation came into effect on May 25, 2018, and has rapidly become a touchstone model for modern privacy law. It empowers consumers with unprecedented control over the use of their personal information. However, these new guarantees of consumer privacy adversely affect data sharing and data application markets, because service companies (e.g., Apple, Google, Microsoft) cannot provide immediate, optimized services through analysis of collected consumer experiences. Data de-identification technologies such as k-anonymity and differential privacy are therefore candidate solutions for protecting the privacy of shared data. Various workarounds based on these existing methods have been proposed, but they offer limited data utility and scale poorly to high-dimensional data sets (the so-called curse of dimensionality). In this paper, we propose the (k, ε, δ)-anonymization synthetic data set generation mechanism ((k, ε, δ)-anonymization for short) to protect data privacy before data sets are released for analysis. Synthetic data sets generated by (k, ε, δ)-anonymization satisfy the definitions of both k-anonymity and differential privacy by applying KD-tree and random sampling mechanisms. Moreover, (k, ε, δ)-anonymization uses principal component analysis to replace high-dimensional data sets with lower-dimensional ones for efficient computation. Finally, we establish the relationships between the parameters k, ε, and δ for k-anonymity and (ε, δ)-differential privacy and estimate the utility of (k, ε, δ)-anonymization synthetic data sets. We report a privacy analysis and a series of experiments demonstrating that (k, ε, δ)-anonymization is feasible and efficient.
AB - The General Data Protection Regulation came into effect on May 25, 2018, and has rapidly become a touchstone model for modern privacy law. It empowers consumers with unprecedented control over the use of their personal information. However, these new guarantees of consumer privacy adversely affect data sharing and data application markets, because service companies (e.g., Apple, Google, Microsoft) cannot provide immediate, optimized services through analysis of collected consumer experiences. Data de-identification technologies such as k-anonymity and differential privacy are therefore candidate solutions for protecting the privacy of shared data. Various workarounds based on these existing methods have been proposed, but they offer limited data utility and scale poorly to high-dimensional data sets (the so-called curse of dimensionality). In this paper, we propose the (k, ε, δ)-anonymization synthetic data set generation mechanism ((k, ε, δ)-anonymization for short) to protect data privacy before data sets are released for analysis. Synthetic data sets generated by (k, ε, δ)-anonymization satisfy the definitions of both k-anonymity and differential privacy by applying KD-tree and random sampling mechanisms. Moreover, (k, ε, δ)-anonymization uses principal component analysis to replace high-dimensional data sets with lower-dimensional ones for efficient computation. Finally, we establish the relationships between the parameters k, ε, and δ for k-anonymity and (ε, δ)-differential privacy and estimate the utility of (k, ε, δ)-anonymization synthetic data sets. We report a privacy analysis and a series of experiments demonstrating that (k, ε, δ)-anonymization is feasible and efficient.
KW - differential privacy
KW - k-anonymity
KW - data privacy
KW - synthetic dataset
UR - https://link.springer.com/article/10.1007/s11761-021-00324-2
U2 - 10.1007/s11761-021-00324-2
DO - 10.1007/s11761-021-00324-2
M3 - Article
SN - 1863-2386
VL - 15
SP - 175
EP - 185
JO - Service Oriented Computing and Applications
JF - Service Oriented Computing and Applications
IS - 3
ER -