
2025-08-15 10:21:22
Efficiently Verifiable Proofs of Data Attribution
Ari Karchmer, Seth Neel, Martin Pawelczyk
https://arxiv.org/abs/2508.10866 https://arxiv.org/pdf/2508.108…
What would compel someone, with everything happening right now, to trust a tech company to host & protect their personal DNA or health data?
Ignorance? Sloth? Apathy? All 3?
From: @…
htt…
FIDELIS: Blockchain-Enabled Protection Against Poisoning Attacks in Federated Learning
Jane Carney, Kushal Upreti, Gaby G. Dagher, Tim Andersen
https://arxiv.org/abs/2508.10042 …
A Curriculum Learning Approach to Reinforcement Learning: Leveraging RAG for Multimodal Question Answering
Chenliang Zhang, Lin Wang, Yuanyuan Lu, Yusheng Qi, Kexin Wang, Peixu Hou, Wenshi Chen
https://arxiv.org/abs/2508.10337
FPF at PDP Week 2025: Generative AI, Digital Trust, and the Future of Cross-Border Data Transfers in APAC
https://fpf.org/blog/fpf-at-pdp-week-2025-generative-ai-digital-trust-and-the-future-of-cross-border-data-tran…
The #owncloud hosting service https://owncloud.online has now been given away as well. Apparently there were still customers who hadn't moved to the proprietary…
Invisible Watermarks, Visible Gains: Steering Machine Unlearning with Bi-Level Watermarking Design
Yuhao Sun, Yihua Zhang, Gaowen Liu, Hongtao Xie, Sijia Liu
https://arxiv.org/abs/2508.10065
What Shapes User Trust in ChatGPT? A Mixed-Methods Study of User Attributes, Trust Dimensions, Task Context, and Societal Perceptions among University Students
Kadija Bouyzourn, Alexandra Birch
https://arxiv.org/abs/2507.05046
Technical specification of a framework for the collection of clinical images and data
Alistair Mackenzie (Royal Surrey NHS Foundation Trust, Guildford, UK), Mark Halling-Brown (Royal Surrey NHS Foundation Trust, Guildford, UK), Ruben van Engen (Dutch Expert Centre for Screening), Carlijn Roozemond (Dutch Expert Centre for Screening), Lucy Warren (Royal Surrey NHS Foundation Trust, Guildford, UK), Dominic Ward (Royal Surrey NHS Foundation Trust, Guildford, UK), Nadia Smith (Royal Surrey…
Check out today's Metacurity for the most critical infosec developments you might have missed over the weekend, including
--Chinese espionage campaign targeted House staffers ahead of trade talks,
--Ethical hackers uncover catastrophic flaws in restaurant chain's platforms,
--Salesloft Drift hack began last March,
--Customer data stolen in Wealthsimple breach,
--Trump to formally nominate Harman for NSA/Cybercom slot,
--Don't trust XChat's e…
»Life360 Secretly Sells Users’ Geolocation Data to Third Parties, Class Action Claims:
A proposed class action alleges family tracking app Life360 secretly sells data about users’ locations and movements to third parties.«
When apps are (almost) free, they make their money by selling you. Don't trust any app that doesn't clearly communicate its data protection practices.
🕵️
Membrane: A Cryptographic Access Control System for Data Lakes
Sam Kumar, Samyukta Yagati, Conor Power, David E. Culler, Raluca Ada Popa
https://arxiv.org/abs/2509.08740 https:/…
AWS deleted my 10-year account and all data without warning
"On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation".
https://www.seuros.com/blog/aws…
Trust Semantics Distillation for Collaborator Selection via Memory-Augmented Agentic AI
Botao Zhu, Jeslyn Wang, Dusit Niyato, Xianbin Wang
https://arxiv.org/abs/2509.08151 https…
AWS deleted my 10-year account and all data without warning
https://www.seuros.com/blog/aws-deleted-my-10-year-account-without-warning/
- The Architecture That Should Have Protected Me -
Small reminder that there is no 100% protection in the…
Stack Overflow survey: 84% of developers use or plan to use AI tools in their workflow, up from 76% in 2024, and 33% trust AI accuracy, down from 43% in 2024 (Sean Michael Kerner/VentureBeat)
https://venturebeat.com/ai/stack-overflo…
The Architecture of Trust: A Framework for AI-Augmented Real Estate Valuation in the Era of Structured Data
Petteri Teikari, Mike Jarrell, Maryam Azh, Harri Pesola
https://arxiv.org/abs/2508.02765
Innovating Augmented Reality Security: Recent E2E Encryption Approaches
Hamish Alsop, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker, Mohamed Amine Ferrag
https://arxiv.org/abs/2509.10313
Today, using #GAFAM software is no longer a matter of needs or preferences. Today, it is a choice in the domain of ethics.
If someone gives me their personal data, be it their contact data, image, or anything else, it is my solemn duty to keep that data secure. If I give it away to an app that uses it for marketing purposes, to train models, to manipulate people, or simply sells it, I fail that trust.
So don't give apps unnecessary permissions, or simply don't use such apps. And if the whole system abuses your data — well, if you can't change it, there are always notebooks and pens, you know.
#FreeSoftware
When (not) to trust Monte Carlo approximations for hierarchical Bayesian inference
Jack Heinzel, Salvatore Vitale
https://arxiv.org/abs/2509.07221 https://…
Czech cyber agency warns against using services and products that send data to China https://therecord.media/czech-nukib-warns-against-products-sending-data-china
Climate Policy Radar is developing an open-source knowledge graph for climate policy. At Berlin Buzzwords, Harrison Pim and Fred O'Loughlin discussed how they combine in-house expertise with a scalable data infrastructure in order to identify key concepts within thousands of global climate policy documents.
Watch the full session: https://youtu.be/H6BhF6zSvp4?si=Jzioutp8n__2XY2c
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Very interesting looking paper: In tech we trust: A history of technophilia in the Intergovernmental Panel on Climate Change's (IPCC) climate mitigation expertise
https://www.sciencedirect.com/science/article/pii/S2214629625003615
Hopefully someone wil…
Aspect-Oriented Programming in Secure Software Development: A Case Study of Security Aspects in Web Applications
Mterorga Ukor
https://arxiv.org/abs/2509.07449 https://
Improving Real-Time Concept Drift Detection using a Hybrid Transformer-Autoencoder Framework
N Harshit, K Mounvik
https://arxiv.org/abs/2508.07085 https://…
23andMe's Data Sold to Nonprofit Run by Its Co-Founder - 'And I Still Don't Trust It' - Slashdot
https://science.slashdot.org/story/25/07/19/0252236/23andmes-data-sold-to-nonprofit-run-by-its-co-founder---and-i-still-dont-trust-it
Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification
Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz
https://arxiv.org/abs/2508.00639
[Thread] an NYT standards editor responds to criticism of its recent Zohran Mamdani story, says the "ultimate source" was Columbia data that Mamdani confirmed (Patrick Healy/@patrickhealynyt)
https://x.com/patrickhealynyt/status/1941262786006483418
Trust and Reputation in Data Sharing: A Survey
Wenbo Wu, George Konstantinidis
https://arxiv.org/abs/2508.14028 https://arxiv.org/pdf/2508.14028
from my link log —
Cracking the Vault: flaws in authentication, identity, and authorization in HashiCorp Vault.
https://cyata.ai/blog/cracking-the-vault-how-we-found-zero-day-flaws-in-authe…
Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI system
Yingfan Zhou, Ester Chen, Manasa Pisipati, Aiping Xiong, Sarah Rajtmajer
https://arxiv.org/abs/2508.01906
Dorothea Baur reflecting on #AI tech bros going all in on even your most personal data, claiming to help you solve a problem they themselves helped create:
"A breach of trust enabled by AI now becomes the justification for surveillance-based trust systems. And the very people who helped break the system are offering to fix it – in exchange for your iris. That’s not a safety fe…
Toward Edge General Intelligence with Multiple-Large Language Model (Multi-LLM): Architecture, Trust, and Orchestration
Haoxiang Luo, Yinqiu Liu, Ruichen Zhang, Jiacheng Wang, Gang Sun, Dusit Niyato, Hongfang Yu, Zehui Xiong, Xianbin Wang, Xuemin Shen
https://arxiv.org/abs/2507.00672
Machine learning-based multimodal prognostic models integrating pathology images and high-throughput omic data for overall survival prediction in cancer: a systematic review
Charlotte Jennings (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Andrew Broad (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Lucy Godson (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Emily…
Scale AI emphasizes that it remains an independent company and says Meta will not have access to Scale's internal systems or customers' confidential information (Scale AI)
https://scale.com/blog/customer-trust-scale-meta-deal
Just looking up the name of one of the characters from Paulus de boskabouter.
Paulus de boskabouter characters – Qwant
https://www.qwant.com/
@…
/2
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a master's program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll set that aside too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care those students don't yet know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets, sketched below), but those would be entirely different tools.
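To make the "controlled library" idea concrete, here's a minimal sketch of what such a tool might look like. The library contents, names, and naive keyword matching are all mine, invented purely for illustration; no such tool exists, and a real one would need proper retrieval and adaptation.

```python
# Hypothetical sketch: a suggestion tool restricted to a vetted snippet
# library, instead of an open-ended generative model. Everything here is
# invented for illustration.

# Course staff curate snippets using only constructs taught so far.
SNIPPET_LIBRARY = {
    "read lines from a file": (
        "with open(path) as f:\n"
        "    lines = f.readlines()"
    ),
    "count word frequencies": (
        "counts = {}\n"
        "for word in words:\n"
        "    counts[word] = counts.get(word, 0) + 1"
    ),
}

def suggest(query: str) -> str | None:
    """Return the vetted snippet whose description best matches the query.

    Unlike an LLM, this can never emit a construct outside the library.
    """
    query_words = set(query.lower().split())
    best, best_overlap = None, 0
    for description, snippet in SNIPPET_LIBRARY.items():
        overlap = len(query_words & set(description.split()))
        if overlap > best_overlap:
            best, best_overlap = snippet, overlap
    return best

print(suggest("how do I read the lines of a file?"))
```

The point of the design isn't the matching; it's that the output vocabulary is closed, so a course could actually guarantee that suggestions stay within what it has taught.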
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand, and either re-prompt or quickly ask an instructor or TA for help; the helper gets rid of the code they don't understand, and they re-prompt or manually add code they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator (see the sketch below). I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
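For a concrete sense of what "constructs the students don't know" means here, this is the flavor of Python an assistant might plausibly emit. To be clear, I wrote this sketch myself as an illustration; it's not output from any particular model.

```python
import asyncio

async def fetch(url: str) -> str:
    # Stand-in for real I/O; an assistant would likely reach for aiohttp.
    await asyncio.sleep(0.01)
    return f"<data from {url}>"

async def fetch_all(urls):
    results = []
    queue = list(urls)
    try:
        # Walrus operator: bind and test in a single expression.
        while (url := queue.pop() if queue else None) is not None:
            results.append(await fetch(url))
        else:
            # while/else: runs only if the loop ends without `break`.
            print("all fetches finished")
    finally:
        # try/finally with no except: cleanup that always runs.
        print(f"{len(results)} results collected")
    return results

def numbered(items):
    # Generator function: `yield` makes iteration lazy.
    n = 0
    for item in items:
        n += 1
        yield n, item

results = asyncio.run(fetch_all(["a.example", "b.example"]))
for n, item in numbered(results):
    print(n, item)
```

Every line of that is legal, reasonably idiomatic Python, and none of it is exotic to a working developer, which is exactly the problem: a student who has only studied loops, lists, and functions has four or five simultaneous unknowns standing between them and their own project's code.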
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened its knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
2/2
At Berlin Buzzwords 2025, Viola Rädle & Raphael Franke presented gamma_flow, an open-source Python package for real-time spectral data analysis.
Watch the full session: https://youtu.be/cDCtdStMWuc?si=FyrOSJ8UIjmeQfLl
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Fully Decentralised Consensus for Extreme-scale Blockchain
Siamak Abdi, Giuseppe Di Fatta, Atta Badii, Giancarlo Fortino
https://arxiv.org/abs/2508.02595 https://
Trusted Data Fusion, Multi-Agent Autonomy, Autonomous Vehicles
R. Spencer Hallyburton, Miroslav Pajic
https://arxiv.org/abs/2507.17875 https://arxiv.org/pd…
Oh no, it happened: a client for a research project I’m working on got upset that we’re doing manual data analysis of survey responses, and complained about why we are so slow when their internal team working on a different report got “everything done in a couple of days with #AI tools”
And then they told us that waiting for proper human analysis is a “waste of time” and that we need to just chuck our dataset into AI and “get it over with”
I really don’t know what to do right now 🥲
Trying to do this properly on their expected timeline will mean very little sleep for multiple days, but giving up on the project’s quality and dumping it into AI will make this entire project a waste of time. (I wouldn’t be able to trust the output of the analysis, or be proud enough of the final report to showcase it as an example of our work, and I don’t want to feed the expectation that everything at work can be rushed through these AI models.)
A Global South Strategy for Evaluating Research Value with ChatGPT
Robin Nunkoo, Mike Thelwall
https://arxiv.org/abs/2508.01882 https://arxiv.org/pdf/2508.…
#Microsoft outsourced administration of classified #DoD data to cheap workers in #China. 🇨🇳 🕵️
My latest update on
The Synthetic Mirror -- Synthetic Data at the Age of Agentic AI
Marcelle Momha
https://arxiv.org/abs/2506.13818 https://arxiv.org/pdf…
Folks are getting a wee bit more concerned about their privacy now that Donald Trump is in charge of the US.
You may have noticed that he and his regime love getting their hands on other people's data.
Privacy isn't the only issue. -- Can you trust Microsoft to deliver on its service promises under American political pressure?
Ask the EU-based International Criminal Court (ICC) which after it issued arrest warrants for Israeli Prime Minister Benjamin Netanyahu fo…
TRUCE-AV: A Multimodal dataset for Trust and Comfort Estimation in Autonomous Vehicles
Aditi Bhalla, Christian Hellert, Enkelejda Kasneci, Nastassja Becker
https://arxiv.org/abs/2508.17880
Do Role-Playing Agents Practice What They Preach? Belief-Behavior Consistency in LLM-Based Simulations of Human Trust
Amogh Mannekote, Adam Davies, Guohao Li, Kristy Elizabeth Boyer, ChengXiang Zhai, Bonnie J Dorr, Francesco Pinto
https://arxiv.org/abs/2507.02197
News Sentiment Embeddings for Stock Price Forecasting
Ayaan Qayyum
https://arxiv.org/abs/2507.01970 https://arxiv.org/pdf/2507.01970
Weighted Levenberg-Marquardt methods for fitting multichannel nuclear cross section data
M. Imbrišak, A. E. Lovell, M. R. Mumpower
https://arxiv.org/abs/2508.19468 https://
Hey, I’m not a developer but I totally vibe-coded this game you have to install on your system and which I cannot confirm has no security holes and won’t exfiltrate your data because I’m still not a developer but trust me and install it ideally with root / admin privileges!
An Efficient Recommendation Filtering-based Trust Model for Securing Internet of Things
Muhammad Ibn Ziauddin, Rownak Rahad Rabbi, SM Mehrab, Fardin Faiyaz, Mosarrat Jahan
https://arxiv.org/abs/2508.17304
Data warehouses, lakes, lakehouses, and more – the choices we make profoundly impact operational costs and development speed. At Berlin Buzzwords, Lars Albertsson dived into how common operational challenges like deployment, failure handling, and data quality are affected by different data processing paradigms.
Watch the full session: https://youtu.be/uev_27z3-1s?si=tXTIQ8aiZeCp_Xa0
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Unveiling the Variability and Chemical Composition of AL Col
Surath C. Ghosh, Santosh Joshi, Samrat Ghosh, Athul Dileep, Otto Trust, Mrinmoy Sarkar, Jaime Andrés Rosales Guzmán, Nicolás Esteban Castro-Toledo, Oleg Malkov, Harinder P. Singh, Kefeng Tan, Sarabjeet S. Bedi
https://arxiv.org/abs/2508.20681
Measuring the Unmeasurable? Systematic Evidence on Scale Transformations in Subjective Survey Data
Caspar Kaiser, Anthony Lepinteur
https://arxiv.org/abs/2507.16440 https://
Have I already told you that Qwant quite regularly serves up excellent results right away, even when it returns only a few?
Qwant – The [French] search engine that respects your privacy
#Qwant
NOSTRA: A noise-resilient and sparse data framework for trust region based multi objective Bayesian optimization
Maryam Ghasemzadeh, Anton van Beek
https://arxiv.org/abs/2508.16476
Robust Training with Data Augmentation for Medical Imaging Classification
Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi
https://arxiv.org/abs/2506.17133
Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle
Anbin Wu (The College of Intelligence and Computing, Tianjin University), Zhiyong Feng (The College of Intelligence and Computing, Tianjin University), Ruitao Feng (The Southern Cross University)
https://arxiv.org/abs/2507.02309
HARPT: A Corpus for Analyzing Consumers' Trust and Privacy Concerns in Mobile Health Apps
Timoteo Kelly, Abdulkadir Korkmaz, Samuel Mallet, Connor Souders, Sadra Aliakbarpour, Praveen Rao
https://arxiv.org/abs/2506.19268
Biases in LLM-Generated Musical Taste Profiles for Recommendation
Bruno Sguerra, Elena V. Epure, Harin Lee, Manuel Moussallam
https://arxiv.org/abs/2507.16708
Breaking the Black Box: Inherently Interpretable Physics-Informed Machine Learning for Imbalanced Seismic Data
Vemula Sreenath, Filippo Gatti, Pierre Jehel
https://arxiv.org/abs/2508.19031
Understanding and Benchmarking the Trustworthiness in Multimodal LLMs for Video Understanding
Youze Wang, Zijun Chen, Ruoyu Chen, Shishen Gu, Yinpeng Dong, Hang Su, Jun Zhu, Meng Wang, Richang Hong, Wenbo Hu
https://arxiv.org/abs/2506.12336
Validation of the MySurgeryRisk Algorithm for Predicting Complications and Death after Major Surgery: A Retrospective Multicenter Study Using OneFlorida Data Trust
Yuanfang Ren, Esra Adiyeke, Ziyuan Guan, Zhenhong Hu, Mackenzie J Meni, Benjamin Shickel, Parisa Rashidi, Tezcan Ozrazgat-Baslanti, Azra Bihorac
https://arxiv.org/abs…
LLMs are now part of our daily work, making coding easier. At Berlin Buzzwords 2025, Ivan Dolgov discussed how they built an in-house LLM for AI code completion in JetBrains products, covering design choices, data preparation, training and model evaluation.
Watch the full session: https://youtu.be/yLHCwi_mgvQ?si=CpW8bGd9jWHq43Ez
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Asteroseismology of HD 23734, HD 68703, and HD 73345 using K2-TESS Space-based Photometry and High-resolution Spectroscopy
Santosh Joshi, Athul Dileep, Eugene Semenko, Mrinmoy Sarkar, Otto Trust, Peter De Cat, Patricia Lampens, Marc-Antoine Dupret, Surath C. Ghosh, David Mkrtichian, Mathijs Vanrespaille, Sugyan Parida, Abhay Pratap Yadav, Pramod Kumar S., P. P. Goswami, Muhammed Riyas, Drisya Karinkuzhi
VISION: Robust and Interpretable Code Vulnerability Detection Leveraging Counterfactual Augmentation
David Egea, Barproda Halder, Sanghamitra Dutta
https://arxiv.org/abs/2508.18933
Implementing Zero Trust Architecture to Enhance Security and Resilience in the Pharmaceutical Supply Chain
Saeid Ghasemshirazi, Ghazaleh Shirvani, Marziye Ranjbar Tavakoli, Bahar Ghaedi, Mohammad Amin Langarizadeh
https://arxiv.org/abs/2508.15776
Towards an Approach for Evaluating the Impact of AI Standards
Julia Lane
https://arxiv.org/abs/2506.13839 https://arxiv.org/pdf/2506.…
Data-driven Trust Bootstrapping for Mobile Edge Computing-based Industrial IoT Services
Prabath Abeysekara, Hai Dong
https://arxiv.org/abs/2508.12560 https://
Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments
Christoph Schnabl, Daniel Hugenroth, Bill Marino, Alastair R. Beresford
https://arxiv.org/abs/2506.23706
"Blockchain-Enabled Zero Trust Framework for Securing FinTech Ecosystems Against Insider Threats and Cyber Attacks"
Avinash Singh, Vikas Pareek, Asish Sharma
https://arxiv.org/abs/2507.19976 …
Characterizing the Dynamics of Conspiracy Related German Telegram Conversations during COVID-19
Elisabeth H\"oldrich, Mathias Angermaier, Jana Lasser, Joao Pinheiro-Neto
https://arxiv.org/abs/2507.13398
Trivial Trojans: How Minimal MCP Servers Enable Cross-Tool Exfiltration of Sensitive Data
Nicola Croce, Tobin South
https://arxiv.org/abs/2507.19880 https://
Social Media Can Reduce Misinformation When Public Scrutiny is High
Gavin Wang, Haofei Qin, Xiao Tang, Lynn Wu
https://arxiv.org/abs/2506.16355 https://
Patient-Centred Explainability in IVF Outcome Prediction
Adarsa Sivaprasad, Ehud Reiter, David McLernon, Nava Tintarev, Siladitya Bhattacharya, Nir Oren
https://arxiv.org/abs/2506.18760
Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh
Rudra Y. Bedekar
https://arxiv.org/abs/2506.17513 https://
NVIDIA GPU Confidential Computing Demystified
Zhongshu Gu, Enriquillo Valdez, Salman Ahmed, Julian James Stephen, Michael Le, Hani Jamjoom, Shixuan Zhao, Zhiqiang Lin
https://arxiv.org/abs/2507.02770
Mining Voter Behaviour and Confidence: A Rule-Based Analysis of the 2022 U.S. Elections
Md Al Jubair, Mohammad Shamsul Arefin, Ahmed Wasif Reza
https://arxiv.org/abs/2507.14236
The Agent Behavior: Model, Governance and Challenges in the AI Digital Age
Qiang Zhang, Pei Yan, Yijia Xu, Chuanpo Fu, Yong Fang, Yang Liu
https://arxiv.org/abs/2508.14415 https…
Towards a Real-Time Warning System for Detecting Inaccuracies in Photoplethysmography-Based Heart Rate Measurements in Wearable Devices
Rania Islmabouli, Marlene Brunner, Devender Kumar, Mahdi Sareban, Gunnar Treff, Michael Neudorfer, Josef Niebauer, Arne Bathke, Jan David Smeddinck
https://arxiv.org/abs/2508.19818
Beneath the Mask: Can Contribution Data Unveil Malicious Personas in Open-Source Projects?
Ruby Nealon
https://arxiv.org/abs/2508.13453 https://arxiv.org/p…
Developing a Responsible AI Framework for Healthcare in Low Resource Countries: A Case Study in Nepal and Ghana
Hari Krishna Neupane, Bhupesh Kumar Mishra
https://arxiv.org/abs/2508.12389
Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems
Muhammad Sharshar, Ahmad Mohammad Saber, Davor Svetinovic, Amr M. Youssef, Deepa Kundur, Ehab F. El-Saadany
https://arxiv.org/abs/2507.22239
Autonomous Cyber Resilience via a Co-Evolutionary Arms Race within a Fortified Digital Twin Sandbox
Malikussaid, Sutiyo
https://arxiv.org/abs/2506.20102 h…