Tootfinder

Opt-in global Mastodon full text search. Join the index!

@patrick_townsend@infosec.exchange
2025-07-18 17:58:31

Protect your privacy – Start now with Signal
 
Autocracies always implement broad surveillance methods in order to identify and punish resistance. Surveillance can take many forms including the capture of your social media posts and email, monitoring your connections to web sites, and preventing the use of private communications through encryption back-doors and other means.
 
Take action now to create ways to communicate privately with your family, friends and colleague…

@arXiv_csCR_bot@mastoxiv.page
2025-06-18 08:44:23

Dual Protection Ring: User Profiling Via Differential Privacy and Service Dissemination Through Private Information Retrieval
Imdad Ullah, Najm Hassan, Tariq Ahamed Ahangar, Zawar Hussain Shah, Mehregan Mahdavi, Andrew Levula
arxiv.org/abs/2506.13170

@arXiv_csCL_bot@mastoxiv.page
2025-06-19 08:16:39

Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers
Tommaso Green, Martin Gubri, Haritz Puerto, Sangdoo Yun, Seong Joon Oh
arxiv.org/abs/2506.15674

@arXiv_statME_bot@mastoxiv.page
2025-07-17 09:02:10

Fiducial Matching: Differentially Private Inference for Categorical Data
Ogonnaya Michael Romanus, Younes Boulaguiem, Roberto Molinari
arxiv.org/abs/2507.11762

@arXiv_csHC_bot@mastoxiv.page
2025-07-17 09:41:30

MExplore: an entity-based visual analytics approach for medical expertise acquisition
Xiao Pang, Yan Huang, Chang Liu, JiYuan Liu, MingYou Liu
arxiv.org/abs/2507.12337

@Dragofix@veganism.social
2025-06-11 23:21:23

Analysis: Private-Label Products Are Driving Plant-Based Retail Sales Growth in Four European Countries vegconomist.com/market-and-tre

@arXiv_csCR_bot@mastoxiv.page
2025-06-17 09:48:48

Versatile and Fast Location-Based Private Information Retrieval with Fully Homomorphic Encryption over the Torus
Joon Soo Yoo, Taeho Kim, Ji Won Yoon
arxiv.org/abs/2506.12761

@danyork@mastodon.social
2025-06-14 12:31:35

❓ for privacy-minded folks: anyone have a recommendation for a scale that measures more than weight (e.g. muscle mass, fat %) but is also privacy-protecting?
And ideally has an “app” or interface that can track the info, see trends over time, etc.
Many choices online… but I have no idea where they are sending data (or to whom they are selling it).
Most private option of course is a scale that keeps all data local, but there is convenience in an app that easily tracks data, s…

A security researcher said flaws in a carmaker’s online dealership portal exposed the private information and vehicle data of its customers, and could have allowed hackers to remotely break into any of its customers’ vehicles. Eaton Zveare, who works as a security researcher at software delivery company Harness, told TechCrunch the flaw he discovered allowed the creation of an admin account that granted “unfettered access” to the unnamed carma…

@arXiv_csCY_bot@mastoxiv.page
2025-08-13 07:34:02

EU Digital Regulation and Guatemala: AI, 5G, and Cybersecurity
Victor Lopez Juarez
arxiv.org/abs/2508.08315 arxiv.org/pdf/2508.08315

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 07:41:42

An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis
Ming Wang, Zhaoyang Duan, Dong Xue, Fangzhou Liu, Zhongheng Zhang
arxiv.org/abs/2507.08050 arxiv.org/pdf/2507.08050 arxiv.org/html/2507.08050
arXiv:2507.08050v1 Announce Type: new
Abstract: The labor-intensive nature of medical data annotation presents a significant challenge for respiratory disease diagnosis, resulting in a scarcity of high-quality labeled datasets in resource-constrained settings. Moreover, patient privacy concerns complicate the direct sharing of local medical data across institutions, and existing centralized data-driven approaches, which rely on large amounts of available data, often compromise data privacy. This study proposes a federated few-shot learning framework with privacy-preserving mechanisms to address the issues of limited labeled data and privacy protection in diagnosing respiratory diseases. In particular, a meta-stochastic gradient descent algorithm is proposed to mitigate the overfitting problem that arises from insufficient data when employing traditional gradient descent methods for neural network training. Furthermore, to ensure data privacy against gradient leakage, differential privacy noise from a standard Gaussian distribution is integrated into the gradients during the training of private models with local data, thereby preventing the reconstruction of medical images. Given the impracticality of centralizing respiratory disease data dispersed across various medical institutions, a weighted average algorithm is employed to aggregate local diagnostic models from different clients, enhancing the adaptability of a model across diverse scenarios. Experimental results show that the proposed method yields compelling results with the implementation of differential privacy, while effectively diagnosing respiratory diseases using data from different structures, categories, and distributions.
toXiv_bot_toot
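The two mechanisms this abstract describes (Gaussian noise added to clipped local gradients for differential privacy, and weighted averaging of client models) can be illustrated with a minimal sketch. This is not the paper's code; the function names, clipping bound, and noise scale below are illustrative assumptions only.

```python
# Minimal sketch (not the paper's implementation): DP-SGD-style noisy
# gradients plus weighted federated averaging, as in the abstract above.
# All names, bounds, and noise scales here are illustrative assumptions.
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a gradient vector to bound sensitivity, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = rng.normal(0.0, noise_std * clip_norm, size=grad.shape)
    return clipped + noise

def weighted_average(client_params, client_weights):
    """Aggregate client parameters by (e.g. data-size) weights, FedAvg-style."""
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Toy usage: three clients, each contributing a locally computed update.
rng = np.random.default_rng(0)
local_updates = [rng.normal(size=4) for _ in range(3)]
noisy_updates = [dp_gradient(g, rng=rng) for g in local_updates]
global_params = weighted_average(noisy_updates, client_weights=[100, 50, 25])
print(global_params)
```

In the full federated setup each client would run several local meta-SGD steps before sending its noisy update, but the clip-then-noise step and the data-size-weighted aggregation are the privacy and averaging pieces the abstract refers to.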

@cdp1337@social.veraciousnetwork.com
2025-07-15 01:01:28

Well this is fun: it seems #Ohio HB96 would require me to pay to collect and store your state-issued ID if you want to browse Mastodon via this instance.
That's fine, I don't mind spending money I don't have to collect users' private data, including your home address and full legal name. You're fine with giving effectively a complete stranger acro…

@karlauerbach@sfba.social
2025-07-13 19:59:00

Remember the good old days when you could send a letter to the US Gov't at Fort Collins, Colorado. You'd include a self-addressed stamped envelope and a cassette tape and they would return the cassette with a recording of WWV so that you could set your clocks.
You think that is silly? Is it any more silly than the maga-klan trying to make US citizens buy weather data (acquired via US assets and taxes) from private dealers such as Accuweather?

@Techmeme@techhub.social
2025-06-02 11:11:06

Wall Street giants like Blackstone, KKR, and BlackRock are pouring hundreds of billions into AI data centers, creating concerns of "oversupply" and a bubble (Maureen Farrell/New York Times)
nytimes.com/2025/06/02/busines

@alejandrobdn@social.linux.pizza
2025-07-06 13:15:32

The EU wants to decrypt your private data by 2030 techradar.com/vpn/vpn-privacy-
The EU Commission unveiled the first step in its security strategy to ensure …

@arXiv_csCR_bot@mastoxiv.page
2025-06-17 10:22:09

Dual Protection Ring: User Profiling Via Differential Privacy and Service Dissemination Through Private Information Retrieval
Imdad Ullah, Najm Hassan, Tariq Ahamed Ahangar, Zawar Hussain Shah, Mehregan Mahdavi, Andrew Levula
arxiv.org/abs/2506.13170

@paulwermer@sfba.social
2025-07-08 16:30:11

Maybe it's time for security forces to understand that information security is a) also important and b) easily compromised. The Strava vulnerability is a known issue - I saw news articles about that a few years ago. But surely by now the security world should know that any app that shares information on an insecure system is a risk?
And I wonder what a review of how this is playing out in the US would show. Inquiring minds, and all that.

@arXiv_statME_bot@mastoxiv.page
2025-07-16 09:13:51

Optimal Debiased Inference on Privatized Data via Indirect Estimation and Parametric Bootstrap
Zhanyu Wang, Arin Chang, Jordan Awan
arxiv.org/abs/2507.10746

@metacurity@infosec.exchange
2025-06-25 10:29:43

"Scale AI left public Google Docs with sensitive details about thousands of its contractors, including their private email addresses and whether they were suspected of
'cheating.'"
africa.businessinsider.co…

@toxi@mastodon.thi.ng
2025-08-03 08:21:59

The scale of investment and the involvement of governments means ROI must be found (or rather created) now, by any means necessary! Demand already is being forcefully created to justify these expenditures. Business models, regulations and policies/politics are pivoted in lockstep. Aside from all the conceptual, ethical and environmental issues of LLMs and their required infrastructure, these shifts are already also impacting chip/hardware production pipelines and start spelling the end of pe…

@arXiv_csDC_bot@mastoxiv.page
2025-08-12 11:01:03

Optimizing Federated Learning for Scalable Power-demand Forecasting in Microgrids
Roopkatha Banerjee, Sampath Koti, Gyanendra Singh, Anirban Chakraborty, Gurunath Gurrala, Bhushan Jagyasi, Yogesh Simmhan
arxiv.org/abs/2508.08022

@davej@dice.camp
2025-07-10 23:19:40

My great failure in life is that I never put myself in a position to monetise being this embarrassingly incompetent.
#auspol #australia #government

How will the changes work, and will they keep kids safe?

Search engines will have a suite of options to choose from for checking the ages of their Australian users.

There are seven main methods listed in the new regulations:

• Photo ID checks
• Face scanning age estimation tools
• Credit card checks
• Digital ID
• Vouching by the parent of a young person
• Using AI to guess a user's age based on the data the company already has
• Relying on a third party that has already checked the user's a…
Age-checking tech for social media ban mistakes kids for 37-year-olds

Children as young as 15 were repeatedly misidentified as being in their 20s and 30s during government tests of age-checking tools, sowing new doubts about whether the teen social media ban is viable.
He also warned the new rules for search engines could be circumvented using virtual private networks (VPNs).

"If the ambition of the government is to prevent children from accessing pornography, they're forgetting straight away the skills of these young people," he said.

Beyond concerns about the accuracy of age-assurance technology and the VPN workaround, the new search engine rules will still allow users to access adult content simply by not logging in.

@arXiv_statML_bot@mastoxiv.page
2025-08-08 08:48:22

High-Dimensional Differentially Private Quantile Regression: Distributed Estimation and Statistical Inference
Ziliang Shen, Caixing Wang, Shaoli Wang, Yibo Yan
arxiv.org/abs/2508.05212

@veit@mastodon.social
2025-08-09 08:02:52

A malicious Jira ticket can cause Cursor to exfiltrate secrets from the repository or local file system. But this is not just a problem with Cursor: GitHub MCP connections can also be exploited to expose private repository data, and a vulnerability in GitLab Duo allowed private information to be exposed through automatically rendered HTML code.

@arXiv_csMM_bot@mastoxiv.page
2025-08-12 08:50:23

Reversible Video Steganography Using Quick Response Codes and Modified ElGamal Cryptosystem
Ramadhan J. Mstafa
arxiv.org/abs/2508.07289 arx…

@arXiv_statME_bot@mastoxiv.page
2025-08-15 09:46:02

Bistochastically private release of longitudinal data
Nicolas Ruiz
arxiv.org/abs/2508.10606 arxiv.org/pdf/2508.10606

@arXiv_csIR_bot@mastoxiv.page
2025-08-11 08:33:20

AquiLLM: a RAG Tool for Capturing Tacit Knowledge in Research Groups
Chandler Campbell, Bernie Boscoe, Tuan Do
arxiv.org/abs/2508.05648 arx…

@tante@tldr.nettime.org
2025-05-27 07:40:05

This is just one example. "MCP" the protocol for "AI agents" is basically without security measures. It's like running random code on your infrastructure and data.
(Original title: GitHub MCP Exploited: Accessing private repositories via MCP)
simonwillison.net…

@arXiv_csCY_bot@mastoxiv.page
2025-07-10 08:41:41

Domestic frontier AI regulation, an IAEA for AI, an NPT for AI, and a US-led Allied Public-Private Partnership for AI: Four institutions for governing and developing frontier AI
Haydn Belfield
arxiv.org/abs/2507.06379

Trump administration is launching a new private health tracking system with Big Tech’s help
The system -- spearheaded by an administration that has already freely shared highly personal data about Americans in ways that have tested legal bounds -- could put patients’ desires for more convenience at their doctor’s office on a collision course with their expectations that their medical information be kept private.
“There are enormous ethical and legal concerns,”

@arXiv_csLG_bot@mastoxiv.page
2025-08-12 11:38:33

Membership and Memorization in LLM Knowledge Distillation
Ziqi Zhang, Ali Shahin Shamsabadi, Hanxiao Lu, Yifeng Cai, Hamed Haddadi
arxiv.org/abs/2508.07054

@arXiv_csCR_bot@mastoxiv.page
2025-08-15 07:33:22

A Robust Pipeline for Differentially Private Federated Learning on Imbalanced Clinical Data using SMOTETomek and FedProx
Rodrigo Tertulino
arxiv.org/abs/2508.10017

@servelan@newsie.social
2025-06-30 15:39:32

"DOGE secured the power to view records that contain competitors’ trade secrets, nonpublic details about government contracts, and sensitive regulatory actions or other information."
'Vulnerable': New alarm as Musk's 'God tier access' to damaging data revealed - Raw Story
rawstory.com/musk-private-data

@arXiv_csIT_bot@mastoxiv.page
2025-06-05 07:19:08

Differentially Private Distribution Release of Gaussian Mixture Models via KL-Divergence Minimization
Hang Liu, Anna Scaglione, Sean Peisert
arxiv.org/abs/2506.03467

@Techmeme@techhub.social
2025-06-30 17:10:41

A look at the rise of $5B unicorns, which make up 13% of ~1,600 unicorns but hold $3.5T of $6T total valuation; 17 startups joined the $5B club in H1 2025 (Gené Teare/Crunchbase News)
news.crunchbase.com/venture/ul

@frankel@mastodon.top
2025-06-21 08:11:04

The lethal trifecta for #AI agents: private data, untrusted content, and external communication
simonwillison.net/2025/Jun/16/

@gwire@mastodon.social
2025-07-22 22:36:27

Not really sure about the concerns related to data. I'm of the opinion that government should be creating and opening *more* documents and data to all data users, including all AI companies.
(Obviously I don't mean private and meaningfully confidential data. I mean reports, minutes, procedures, basically any non-confidential document obtainable via FOIA.)

@johnleonard@mastodon.social
2025-05-30 15:07:31

Data privacy experts are calling on the government to urgently tighten regulation around facial recognition technology, amid growing concerns over its unregulated use by police forces and private companies across the UK.
computing.co.uk/news/202…

@arXiv_csAI_bot@mastoxiv.page
2025-07-23 10:06:42

Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework
Hongyi Tang, Zhihao Zhu, Yi Yang
arxiv.org/abs/2507.16414

@ErikJonker@mastodon.social
2025-07-21 16:35:20

Incredible if you think about it...
"The bug, when exploited, allows hackers to steal private digital keys from SharePoint servers without needing any credentials to log in. Once in, the hackers can remotely plant malware, and gain access to the files and data stored within"
Big #Microsoft

@arXiv_csAR_bot@mastoxiv.page
2025-06-02 07:15:29

Chameleon: A MatMul-Free Temporal Convolutional Network Accelerator for End-to-End Few-Shot and Continual Learning from Sequential Data
Douwe den Blanken, Charlotte Frenkel
arxiv.org/abs/2505.24852

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 09:53:40

Current State in Privacy-Preserving Text Preprocessing for Domain-Agnostic NLP
Abhirup Sinha, Pritilata Saha, Tithi Saha
arxiv.org/abs/2508.03204

@arXiv_csCR_bot@mastoxiv.page
2025-06-13 07:22:30

Private Memorization Editing: Turning Memorization into a Defense to Strengthen Data Privacy in Large Language Models
Elena Sofia Ruzzetti, Giancarlo A. Xompero, Davide Venditti, Fabio Massimo Zanzotto
arxiv.org/abs/2506.10024

@arXiv_eessIV_bot@mastoxiv.page
2025-06-06 07:22:21

Gradient Inversion Attacks on Parameter-Efficient Fine-Tuning
Hasin Us Sami, Swapneel Sen, Amit K. Roy-Chowdhury, Srikanth V. Krishnamurthy, Basak Guler
arxiv.org/abs/2506.04453

@arXiv_csDS_bot@mastoxiv.page
2025-05-30 07:17:30

Differentially Private Space-Efficient Algorithms for Counting Distinct Elements in the Turnstile Model
Rachel Cummings, Alessandro Epasto, Jieming Mao, Tamalika Mukherjee, Tingting Ou, Peilin Zhong
arxiv.org/abs/2505.23682

@arXiv_csNI_bot@mastoxiv.page
2025-06-06 07:20:29

Indoor Sharing in the Mid-Band: A Performance Study of Neutral-Host, Cellular Macro, and Wi-Fi
Joshua Roy Palathinkal, Muhammad Iqbal Rochman, Vanlin Sathya, Mehmet Yavuz, Monisha Ghosh
arxiv.org/abs/2506.04974

@arXiv_csSE_bot@mastoxiv.page
2025-06-23 10:32:50

LLMs in Coding and their Impact on the Commercial Software Engineering Landscape
Vladislav Belozerov, Peter J Barclay, Askhan Sami
arxiv.org/abs/2506.16653

@arXiv_eessAS_bot@mastoxiv.page
2025-08-08 08:48:52

Privacy Disclosure of Similarity in Speech and Language Processing
Tom Bäckström, Mohammad Hassan Vali, My Nguyen, Silas Rech
arxiv.org/abs/2508.05250

@arXiv_econGN_bot@mastoxiv.page
2025-06-10 08:27:22

The impact of extracurricular education on socioeconomic mobility in Japan: an application of causal machine learning
Yang Qiang
arxiv.org/abs/2506.07421

@arXiv_csSD_bot@mastoxiv.page
2025-06-02 09:59:58

This arxiv.org/abs/2406.15119 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSD_…

@arXiv_csLG_bot@mastoxiv.page
2025-07-04 10:15:11

Embedding-Based Federated Data Sharing via Differentially Private Conditional VAEs
Francesco Di Salvo, Hanh Huyen My Nguyen, Christian Ledig
arxiv.org/abs/2507.02671

@arXiv_csCR_bot@mastoxiv.page
2025-06-16 07:33:19

Differential Privacy in Machine Learning: From Symbolic AI to LLMs
Francisco Aguilera-Martínez, Fernando Berzal
arxiv.org/abs/2506.11687

@Techmeme@techhub.social
2025-06-02 06:26:07

A deep dive into Apple TV's privacy features shows that Apple's streaming device is more private than the vast majority of alternatives, save for dumb TVs (Scharon Harding/Ars Technica)
arstechnica.com/gadgets/2025/0

@arXiv_csIR_bot@mastoxiv.page
2025-06-04 13:35:43

This arxiv.org/abs/2505.05031 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csIT_bot@mastoxiv.page
2025-08-08 08:52:32

Necessity of Block Designs for Optimal Locally Private Distribution Estimation
Abigail Gentle
arxiv.org/abs/2508.05110 arxiv.org/pdf/2508.0…

@blaise@mastodon.cloud
2025-05-22 20:00:36

Whatever is left of our constitutional rights and liberties exists mostly in the gaps between what the government knows about us. This is the beginning of the end. Soon, when conservatives are in charge, they'll make us follow their stupid, puritanical rules whether we like it or not, and when liberals are in charge, well, they'll make us follow *their* stupid, puritanical rules whether we like it or not.
1/2

@arXiv_csCR_bot@mastoxiv.page
2025-07-08 08:03:30

Aim High, Stay Private: Differentially Private Synthetic Data Enables Public Release of Behavioral Health Information with High Utility
Mohsen Ghasemizade, Juniper Lovato, Christopher M. Danforth, Peter Sheridan Dodds, Laura S. P. Bloomfield, Matthew Price, Team LEMURS, Joseph P. Near
arxiv.org/abs/2507.02971

@arXiv_physicscompph_bot@mastoxiv.page
2025-06-27 09:02:09

Benchmarking and Parallelization of Electrostatic Particle-In-Cell for low-temperature Plasma Simulation by particle-thread Binding
Libn Varghese, Bhaskar Chaudhury, Miral Shah, Mainak Bandyopadhyay
arxiv.org/abs/2506.21524

@arXiv_statML_bot@mastoxiv.page
2025-06-23 10:39:30

Latent Noise Injection for Private and Statistically Aligned Synthetic Data Generation
Rex Shen, Lu Tian
arxiv.org/abs/2506.16636

@arXiv_csCR_bot@mastoxiv.page
2025-08-15 09:12:32

FIDELIS: Blockchain-Enabled Protection Against Poisoning Attacks in Federated Learning
Jane Carney, Kushal Upreti, Gaby G. Dagher, Tim Andersen
arxiv.org/abs/2508.10042

@arXiv_csDS_bot@mastoxiv.page
2025-06-03 07:20:59

Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor
Maryam Aliakbarpour, Zhan Shi, Ria Stevens, Vincent X. Wang
arxiv.org/abs/2506.01162

@arXiv_csCY_bot@mastoxiv.page
2025-07-30 07:48:51

Dependency on Meta AI Chatbot in Messenger Among STEM and Non-STEM Students in Higher Education
Hilene E. Hernandez, Rhiziel P. Manalese, Roque Francis B. Dianelo, Jaymark A. Yambao, Almer B. Gamboa, Lloyd D. Feliciano, Mike Haizon M. David, Freneil R. Pampo, John Paul P. Miranda
arxiv.org/abs/2507.21059

@Techmeme@techhub.social
2025-07-31 15:35:54

Israel-based Noma Security, whose platform secures enterprise data and AI models against AI agents, raised a $100M Series B, bringing its total funding to $132M (Steven Scheer/Reuters)
reuters.com/world/middle-east/

@arXiv_csNI_bot@mastoxiv.page
2025-07-29 10:30:22

FedABC: Attention-Based Client Selection for Federated Learning with Long-Term View
Wenxuan Ye, Xueli An, Junfan Wang, Xueqiang Yan, Georg Carle
arxiv.org/abs/2507.20871

@arXiv_csIT_bot@mastoxiv.page
2025-07-25 08:26:11

Minimax Data Sanitization with Distortion Constraint and Adversarial Inference
Amirarsalan Moatazedian, Yauhen Yakimenka, Rémi A. Chou, Jörg Kliewer
arxiv.org/abs/2507.17942

@arXiv_csIR_bot@mastoxiv.page
2025-06-03 07:27:23

Adapting General-Purpose Embedding Models to Private Datasets Using Keyword-based Retrieval
Yubai Wei, Jiale Han, Yi Yang
arxiv.org/abs/2506.00363

@arXiv_csLG_bot@mastoxiv.page
2025-06-05 10:57:08

This arxiv.org/abs/2505.17226 has been replaced.
initial toot: mastoxiv.page/@arXiv_csLG_…

@arXiv_statME_bot@mastoxiv.page
2025-07-08 12:22:40

Blind Targeting: Personalization under Third-Party Privacy Constraints
Anya Shchetkina
arxiv.org/abs/2507.05175 arxiv…

@arXiv_csCR_bot@mastoxiv.page
2025-06-13 07:42:50

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks
Kaiyuan Zhang, Siyuan Cheng, Hanxi Guo, Yuetian Chen, Zian Su, Shengwei An, Yuntao Du, Charles Fleming, Ashish Kundu, Xiangyu Zhang, Ninghui Li
arxiv.org/abs/2506.10424

@arXiv_statML_bot@mastoxiv.page
2025-07-29 09:42:31

Statistical Inference for Differentially Private Stochastic Gradient Descent
Xintao Xia, Linjun Zhang, Zhanrui Cai
arxiv.org/abs/2507.20560

@arXiv_csDS_bot@mastoxiv.page
2025-06-27 08:19:29

Practical and Accurate Local Edge Differentially Private Graph Algorithms
Pranay Mundra, Charalampos Papamanthou, Julian Shun, Quanquan C. Liu
arxiv.org/abs/2506.20828

@arXiv_csCR_bot@mastoxiv.page
2025-07-08 10:39:41

Accelerating Private Heavy Hitter Detection on Continual Observation Streams
Rayne Holland
arxiv.org/abs/2507.03361 a…

@arXiv_csCR_bot@mastoxiv.page
2025-06-09 08:11:53

Differentially Private Explanations for Clusters
Amir Gilad, Tova Milo, Kathy Razmadze, Ron Zadicario
arxiv.org/abs/2506.05900

@arXiv_csCR_bot@mastoxiv.page
2025-06-06 09:37:49

This arxiv.org/abs/2505.12612 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csCY_bot@mastoxiv.page
2025-07-25 08:32:52

Countering Privacy Nihilism
Severin Engelmann, Helen Nissenbaum
arxiv.org/abs/2507.18253 arxiv.org/pdf/2507.18253

@arXiv_csCR_bot@mastoxiv.page
2025-06-06 07:16:21

Authenticated Private Set Intersection: A Merkle Tree-Based Approach for Enhancing Data Integrity
Zixian Gong, Zhiyong Zheng, Zhe Hu, Kun Tian, Yi Zhang, Zhedanov Oleksiy, Fengxia Liu
arxiv.org/abs/2506.04647

@arXiv_statME_bot@mastoxiv.page
2025-07-22 10:37:40

Robust and Differentially Private PCA for non-Gaussian data
Minwoo Kim, Sungkyu Jung
arxiv.org/abs/2507.15232 arxiv.o…

@arXiv_csCR_bot@mastoxiv.page
2025-08-07 09:42:14

DP-DocLDM: Differentially Private Document Image Generation using Latent Diffusion Models
Saifullah Saifullah, Stefan Agne, Andreas Dengel, Sheraz Ahmed
arxiv.org/abs/2508.04208

@arXiv_csCR_bot@mastoxiv.page
2025-07-31 09:16:51

Benchmarking Fraud Detectors on Private Graph Data
Alexander Goldberg, Giulia Fanti, Nihar Shah, Zhiwei Steven Wu
arxiv.org/abs/2507.22347

@arXiv_csCR_bot@mastoxiv.page
2025-07-01 10:42:53

Detect & Score: Privacy-Preserving Misbehaviour Detection and Contribution Evaluation in Federated Learning
Marvin Xhemrishi, Alexandre Graell i Amat, Balázs Pejó
arxiv.org/abs/2506.23583

@arXiv_csCR_bot@mastoxiv.page
2025-08-06 08:53:40

VFLAIR-LLM: A Comprehensive Framework and Benchmark for Split Learning of LLMs
Zixuan Gu, Qiufeng Fan, Long Sun, Yang Liu, Xiaojun Ye
arxiv.org/abs/2508.03097

@arXiv_csCR_bot@mastoxiv.page
2025-07-29 10:56:01

Testbed and Software Architecture for Enhancing Security in Industrial Private 5G Networks
Song Son Ha, Florian Foerster, Thomas Robert Doebbert, Tim Kittel, Dominik Merli, Gerd Scholl
arxiv.org/abs/2507.20873

@arXiv_csCR_bot@mastoxiv.page
2025-07-29 11:05:11

Development and analysis of a secured VoIP system for surveillance activities
M. Matsive Ali
arxiv.org/abs/2507.21038 arxiv.org/pdf/2507.21…

@arXiv_csCR_bot@mastoxiv.page
2025-07-01 11:06:03

Differentially Private Synthetic Data Release for Topics API Outputs
Travis Dick, Alessandro Epasto, Adel Javanmard, Josh Karlin, Andres Munoz Medina, Vahab Mirrokni, Sergei Vassilvitskii, Peilin Zhong
arxiv.org/abs/2506.23855

@arXiv_csCR_bot@mastoxiv.page
2025-06-02 07:16:46

DP-RTFL: Differentially Private Resilient Temporal Federated Learning for Trustworthy AI in Regulated Industries
Abhijit Talluri
arxiv.org/abs/2505.23813

@arXiv_csCR_bot@mastoxiv.page
2025-06-25 08:42:50

SoK: Can Synthetic Images Replace Real Data? A Survey of Utility and Privacy of Synthetic Image Generation
Yunsung Chung, Yunbei Zhang, Nassir Marrouche, Jihun Hamm
arxiv.org/abs/2506.19360

@arXiv_csCR_bot@mastoxiv.page
2025-06-27 09:20:49

Balancing Privacy and Utility in Correlated Data: A Study of Bayesian Differential Privacy
Martin Lange, Patricia Guerra-Balboa, Javier Parra-Arnau, Thorsten Strufe
arxiv.org/abs/2506.21308

@arXiv_csCR_bot@mastoxiv.page
2025-06-02 10:01:22

This arxiv.org/abs/2412.11369 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csCR_bot@mastoxiv.page
2025-06-03 17:47:56

This arxiv.org/abs/2504.16449 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csCR_bot@mastoxiv.page
2025-07-03 09:00:00

How to Securely Shuffle? A survey about Secure Shufflers for privacy-preserving computations
Marc Damie, Florian Hahn, Andreas Peter, Jan Ramon
arxiv.org/abs/2507.01487

@arXiv_csCR_bot@mastoxiv.page
2025-07-24 09:16:19

Threshold-Protected Searchable Sharing: Privacy Preserving Aggregated-ANN Search for Collaborative RAG
Ruoyang Rykie Guo
arxiv.org/abs/2507.17199

@arXiv_csCR_bot@mastoxiv.page
2025-06-02 09:55:13

This arxiv.org/abs/2311.16139 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csCR_bot@mastoxiv.page
2025-06-26 09:42:10

Communication-Efficient Publication of Sparse Vectors under Differential Privacy
Quentin Hillebrand, Vorapong Suppakitpaisarn, Tetsuo Shibuya
arxiv.org/abs/2506.20234

@arXiv_csCR_bot@mastoxiv.page
2025-06-25 09:47:30

Machine Learning with Privacy for Protected Attributes
Saeed Mahloujifar, Chuan Guo, G. Edward Suh, Kamalika Chaudhuri
arxiv.org/abs/2506.19836