
Responsible Diffusion: A Comprehensive Survey on Safety, Ethics, and Trust in Diffusion Models
Kang Wei, Xin Yuan, Fushuo Huo, Chuan Ma, Long Yuan, Songze Li, Ming Ding, Dacheng Tao
https://arxiv.org/abs/2509.22723
Validation of the MySurgeryRisk Algorithm for Predicting Complications and Death after Major Surgery: A Retrospective Multicenter Study Using OneFlorida Data Trust
Yuanfang Ren, Esra Adiyeke, Ziyuan Guan, Zhenhong Hu, Mackenzie J Meni, Benjamin Shickel, Parisa Rashidi, Tezcan Ozrazgat-Baslanti, Azra Bihorac
https://arxiv.org/abs…
Hey, I’m not a developer but I totally vibe-coded this game you have to install on your system and which I cannot confirm has no security holes and won’t exfiltrate your data because I’m still not a developer but trust me and install it ideally with root / admin privileges!
Data warehouses, lakes, lakehouses, and more – the choices we make profoundly impact operational costs and development speed. At Berlin Buzzwords, Lars Albertsson dived into how common operational challenges like deployment, failure handling, and data quality are affected by different data processing paradigms.
Watch the full session: https://youtu.be/uev_27z3-1s?si=tXTIQ8aiZeCp_Xa0
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
"Blockchain-Enabled Zero Trust Framework for Securing FinTech Ecosystems Against Insider Threats and Cyber Attacks"
Avinash Singh, Vikas Pareek, Asish Sharma
https://arxiv.org/abs/2507.19976 …
Unveiling the Variability and Chemical Composition of AL Col
Surath C. Ghosh, Santosh Joshi, Samrat Ghosh, Athul Dileep, Otto Trust, Mrinmoy Sarkar, Jaime Andrés Rosales Guzmán, Nicolás Esteban Castro-Toledo, Oleg Malkov, Harinder P. Singh, Kefeng Tan, Sarabjeet S. Bedi
https://arxiv.org/abs/2508.20681
Trust and Human Autonomy after Cobot Failures: Communication is Key for Industry 5.0
Felix Glawe, Laura Kremer, Luisa Vervier, Philipp Brauner, Martina Ziefle
https://arxiv.org/abs/2509.22298
Breaking the Black Box: Inherently Interpretable Physics-Informed Machine Learning for Imbalanced Seismic Data
Vemula Sreenath, Filippo Gatti, Pierre Jehel
https://arxiv.org/abs/2508.19031
Trivial Trojans: How Minimal MCP Servers Enable Cross-Tool Exfiltration of Sensitive Data
Nicola Croce, Tobin South
https://arxiv.org/abs/2507.19880 https://
VISION: Robust and Interpretable Code Vulnerability Detection Leveraging Counterfactual Augmentation
David Egea, Barproda Halder, Sanghamitra Dutta
https://arxiv.org/abs/2508.18933
Weekend Reads
* Monitoring AS-SETs
https://blog.cloudflare.com/monitoring-as-sets-and-why-they-matter/
* IX LAN broadcast traffic
from my link log —
Cracking the Vault: flaws in authentication, identity, and authorization in HashiCorp Vault.
https://cyata.ai/blog/cracking-the-vault-how-we-found-zero-day-flaws-in-authe…
23andMe's Data Sold to Nonprofit Run by Its Co-Founder - 'And I Still Don't Trust It' - Slashdot
https://science.slashdot.org/story/25/07/19/0252236/23andmes-data-sold-to-nonprofit-run-by-its-co-founder---and-i-still-dont-trust-it
Weighted Levenberg-Marquardt methods for fitting multichannel nuclear cross section data
M. Imbrišak, A. E. Lovell, M. R. Mumpower
https://arxiv.org/abs/2508.19468 https://
🎨 Perfect for #AI agents needing grounded web context and research tools demanding trust and freshness
🚀 Enables custom products where developers want complete control over how search data is used
📊 Structured response format eliminates need for complex data parsing and preprocessing steps
🌐
Trust and Reputation in Data Sharing: A Survey
Wenbo Wu, George Konstantinidis
https://arxiv.org/abs/2508.14028 https://arxiv.org/pdf/2508.14028
Machine learning-based multimodal prognostic models integrating pathology images and high-throughput omic data for overall survival prediction in cancer: a systematic review
Charlotte Jennings (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Andrew Broad (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Lucy Godson (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Emily…
Asteroseismology of HD 23734, HD 68703, and HD 73345 using K2-TESS Space-based Photometry and High-resolution Spectroscopy
Santosh Joshi, Athul Dileep, Eugene Semenko, Mrinmoy Sarkar, Otto Trust, Peter De Cat, Patricia Lampens, Marc-Antoine Dupret, Surath C. Ghosh, David Mkrtichian, Mathijs Vanrespaille, Sugyan Parida, Abhay Pratap Yadav, Pramod Kumar S., P. P. Goswami, Muhammed Riyas, Drisya Karinkuzhi
#Microsoft outsourced administration of classified #DoD data to cheap workers in #China. 🇨🇳 🕵️
My latest update on
FPF at PDP Week 2025: Generative AI, Digital Trust, and the Future of Cross-Border Data Transfers in APAC
https://fpf.org/blog/fpf-at-pdp-week-2025-generative-ai-digital-trust-and-the-future-of-cross-border-data-tran…
Trusted Data Fusion, Multi-Agent Autonomy, Autonomous Vehicles
R. Spencer Hallyburton, Miroslav Pajic
https://arxiv.org/abs/2507.17875 https://arxiv.org/pd…
An Efficient Recommendation Filtering-based Trust Model for Securing Internet of Things
Muhammad Ibn Ziauddin, Rownak Rahad Rabbi, SM Mehrab, Fardin Faiyaz, Mosarrat Jahan
https://arxiv.org/abs/2508.17304
TRUCE-AV: A Multimodal dataset for Trust and Comfort Estimation in Autonomous Vehicles
Aditi Bhalla, Christian Hellert, Enkelejda Kasneci, Nastassja Becker
https://arxiv.org/abs/2508.17880
NOSTRA: A noise-resilient and sparse data framework for trust region based multi objective Bayesian optimization
Maryam Ghasemzadeh, Anton van Beek
https://arxiv.org/abs/2508.16476
Swarm Oracle: Trustless Blockchain Agreements through Robot Swarms
Alexandre Pacheco, Hanqing Zhao, Volker Strobel, Tarik Roukny, Gregory Dudek, Andreagiovanni Reina, Marco Dorigo
https://arxiv.org/abs/2509.15956
Oh no it happened - client for a research project I’m working on got upset that we’re doing manual data analysis of survey responses, and complained about why we are so slow when their internal team working on a different report got “everything done in a couple of days with #AI tools”
And then they told us that waiting for proper human analysis is a “waste of time” and that we need to just chuck our dataset into AI and “get it over with”
I really don’t know what to do right now 🥲
Trying to do this properly on their expected timeline will mean very little sleep for multiple days, but giving up on the project quality and dumping it into AI will make this entire project a waste of time. (As I wouldn’t be able to trust the output of the analysis, or be proud enough of it to showcase the final report as an example of our work, not to mention that I don’t want to support this expectation to rush everything at work with these AI models)
Technical specification of a framework for the collection of clinical images and data
Alistair Mackenzie (Royal Surrey NHS Foundation Trust, Guildford, UK), Mark Halling-Brown (Royal Surrey NHS Foundation Trust, Guildford, UK), Ruben van Engen (Dutch Expert Centre for Screening), Carlijn Roozemond (Dutch Expert Centre for Screening), Lucy Warren (Royal Surrey NHS Foundation Trust, Guildford, UK), Dominic Ward (Royal Surrey NHS Foundation Trust, Guildford, UK), Nadia Smith (Royal Surrey…
Explainability Needs in Agriculture: Exploring Dairy Farmers' User Personas
Mengisti Berihu Girmay, Jakob Droste, Hannah Deters, Joerg Doerr
https://arxiv.org/abs/2509.16249
Check out today's Metacurity for the most critical infosec developments you might have missed over the weekend, including
--Chinese espionage campaign targeted House staffers ahead of trade talks,
--Ethical hackers uncover catastrophic flaws in restaurant chain's platforms,
--Salesloft Drift hack began last March,
--Customer data stolen in Wealthsimple breach,
--Trump to formally nominate Harman for NSA/Cybercom slot,
--Don't trust XChat's e…
AWS deleted my 10-year account and all data without warning
"On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation".
https://www.seuros.com/blog/aws…
Stack Overflow survey: 84% of developers use or plan to use AI tools in their workflow, up from 76% in 2024, and 33% trust AI accuracy, down from 43% in 2024 (Sean Michael Kerner/VentureBeat)
https://venturebeat.com/ai/stack-overflo…
AWS deleted my 10-year account and all data without warning
https://www.seuros.com/blog/aws-deleted-my-10-year-account-without-warning/
- The Architecture That Should Have Protected Me -
Small reminder that there is no 100% protection in the…
Measuring the Unmeasurable? Systematic Evidence on Scale Transformations in Subjective Survey Data
Caspar Kaiser, Anthony Lepinteur
https://arxiv.org/abs/2507.16440 https://
Today, using #GAFAM software is no longer a matter of needs or preferences. Today, it is a choice in the domain of ethics.
If someone gives me their personal data, be it their contact data, image, or anything else, it is my solemn duty to keep that data secure. If I give it away to an app that uses it for marketing purposes, to train models, to manipulate people, or simply sells it, I fail that trust.
So don't give apps unnecessary permissions, or simply don't use such apps. And if the whole system abuses your data — well, if you can't change it, there are always notebooks and pens, you know.
#FreeSoftware
Implementing Zero Trust Architecture to Enhance Security and Resilience in the Pharmaceutical Supply Chain
Saeid Ghasemshirazi, Ghazaleh Shirvani, Marziye Ranjbar Tavakoli, Bahar Ghaedi, Mohammad Amin Langarizadeh
https://arxiv.org/abs/2508.15776
The #owncloud hosting service https://owncloud.online has now also been given away. Apparently there were still customers who had not yet moved to the proprietary
Biases in LLM-Generated Musical Taste Profiles for Recommendation
Bruno Sguerra, Elena V. Epure, Harin Lee, Manuel Moussallam
https://arxiv.org/abs/2507.16708
Towards a Real-Time Warning System for Detecting Inaccuracies in Photoplethysmography-Based Heart Rate Measurements in Wearable Devices
Rania Islmabouli, Marlene Brunner, Devender Kumar, Mahdi Sareban, Gunnar Treff, Michael Neudorfer, Josef Niebauer, Arne Bathke, Jan David Smeddinck
https://arxiv.org/abs/2508.19818
What would compel someone, with everything happening right now, to trust a tech company to host & protect your personal DNA or health data?
Ignorance? Sloth? Apathy? All 3?
From: @…
htt…
Benchmarking Offline Reinforcement Learning for Emotion-Adaptive Social Robotics
Soon Jynn Chu, Raju Gottumukkala, Alan Barhorst
https://arxiv.org/abs/2509.16858 https://…
Efficiently Verifiable Proofs of Data Attribution
Ari Karchmer, Seth Neel, Martin Pawelczyk
https://arxiv.org/abs/2508.10866 https://arxiv.org/pdf/2508.108…
Czech cyber agency warns against using services and products that send data to China https://therecord.media/czech-nukib-warns-against-products-sending-data-china
Can You Trust Your Copilot? A Privacy Scorecard for AI Coding Assistants
Amir AL-Maamari
https://arxiv.org/abs/2509.20388 https://arxiv.org/pdf/2509.20388
Monitoring Machine Learning Systems: A Multivocal Literature Review
Hira Naveed, Scott Barnett, Chetan Arora, John Grundy, Hourieh Khalajzadeh, Omar Haggag
https://arxiv.org/abs/2509.14294
Trust Semantics Distillation for Collaborator Selection via Memory-Augmented Agentic AI
Botao Zhu, Jeslyn Wang, Dusit Niyato, Xianbin Wang
https://arxiv.org/abs/2509.08151 https…
Very interesting looking paper: In tech we trust: A history of technophilia in the Intergovernmental Panel on Climate Change's (IPCC) climate mitigation expertise
https://www.sciencedirect.com/science/article/pii/S2214629625003615
Hopefully someone wil…
Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification
Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz
https://arxiv.org/abs/2508.00639
[Thread] an NYT standards editor responds to criticism of its recent Zohran Mamdani story, says the "ultimate source" was Columbia data that Mamdani confirmed (Patrick Healy/@patrickhealynyt)
https://x.com/patrickhealynyt/status/1941262786006483418
With Apache NiFi, a multimodal data pipelining tool, you can assemble existing and/or custom Java & Python processors into a variety of flows. At Berlin Buzzwords 2025, Lester Martin discussed how a rich data pipeline can be constructed from Kafka, stored using the Apache Iceberg table format and consumed from Trino.
Watch the full session: https://youtu.be/2yH9PfiXb9Y?si=Xhi_xDMzLQOps8my
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
V-ZOR: Enabling Verifiable Cross-Blockchain Communication via Quantum-Driven ZKP Oracle Relays
M. Z. Haider, Tayyaba Noreen, M. Salman, M. Dias de Assuncao, Kaiwen Zhang
https://arxiv.org/abs/2509.10996
When (not) to trust Monte Carlo approximations for hierarchical Bayesian inference
Jack Heinzel, Salvatore Vitale
https://arxiv.org/abs/2509.07221 https://…
Mining Voter Behaviour and Confidence: A Rule-Based Analysis of the 2022 U.S. Elections
Md Al Jubair, Mohammad Shamsul Arefin, Ahmed Wasif Reza
https://arxiv.org/abs/2507.14236
Toward Edge General Intelligence with Multiple-Large Language Model (Multi-LLM): Architecture, Trust, and Orchestration
Haoxiang Luo, Yinqiu Liu, Ruichen Zhang, Jiacheng Wang, Gang Sun, Dusit Niyato, Hongfang Yu, Zehui Xiong, Xianbin Wang, Xuemin Shen
https://arxiv.org/abs/2507.00672
Fraud detection and risk assessment of online payment transactions on e-commerce platforms based on LLM and GCN frameworks
RuiHan Luo, Nanxi Wang, Xiaotong Zhu
https://arxiv.org/abs/2509.09928
Just looking up the name of one of the characters from Paulus de boskabouter.
Paulus de boskabouter characters – Qwant
https://www.qwant.com/
@…
/2
Data-driven Trust Bootstrapping for Mobile Edge Computing-based Industrial IoT Services
Prabath Abeysekara, Hai Dong
https://arxiv.org/abs/2508.12560 https://
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance"?
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper, where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on how much project complexity students can reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of care, which those students don't know how to deploy, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools; a toy sketch follows.
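To make that alternate design concrete, here is a minimal sketch of the controlled-snippet-library idea. This is entirely my own illustration, not an existing tool: the Snippet type, the COVERED whitelist, and the suggest function are hypothetical names, under the assumption that a course maintains a vetted snippet library tagged by construct.

```python
# Minimal sketch (hypothetical, not an existing tool): suggestions can only
# come from a vetted snippet library, filtered to the language constructs
# the course has actually covered so far.
from dataclasses import dataclass


@dataclass(frozen=True)
class Snippet:
    title: str
    constructs: frozenset  # language features the snippet relies on
    code: str


# Hypothetical whitelist: constructs taught in the course up to this point.
COVERED = frozenset({"for", "if", "def", "list", "dict"})

LIBRARY = [
    Snippet("count word frequencies", frozenset({"for", "dict"}),
            "counts = {}\nfor w in words:\n    counts[w] = counts.get(w, 0) + 1"),
    Snippet("filter a list", frozenset({"for", "if", "list"}),
            "evens = []\nfor n in nums:\n    if n % 2 == 0:\n        evens.append(n)"),
    # This one would never be suggested to students who haven't seen async.
    Snippet("fetch a url asynchronously", frozenset({"async", "await"}),
            "async def fetch(url): ..."),
]


def suggest(query: str) -> list:
    """Return snippets whose titles overlap the query AND whose constructs
    are a subset of what the course has covered."""
    terms = set(query.lower().split())
    return [s for s in LIBRARY
            if terms & set(s.title.split()) and s.constructs <= COVERED]


if __name__ == "__main__":
    for s in suggest("filter the list of numbers"):
        print(s.title, "->", s.code.splitlines()[0], "...")
```

The point of the design is the subset test: unlike a generative model, a retrieval tool can refuse outright to surface anything built from constructs the class hasn't taught.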
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they realize that they don't understand, and quickly ask an instructor or TA for help getting rid of the stuff they don't understand, then re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of a full programming language that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
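For concreteness, here is a contrived snippet of my own (not from the thread) that packs together the Python constructs named above; every one of them is routine in modern training data but plausibly unfamiliar to a second- or third-year student:

```python
# Contrived illustration of constructs an LLM may emit unprompted:
# generators (yield), async/await, while/else, try/finally, and the
# walrus operator. None of this code is from the original post.
import asyncio


def chunks(items, size):
    # 'yield' quietly turns this into a generator function
    for i in range(0, len(items), size):
        yield items[i:i + size]


async def double(n):
    # async/await syntax, surprising if the course never covered coroutines
    await asyncio.sleep(0)
    return n * 2


def find(items, target):
    i = 0
    while i < len(items):
        if items[i] == target:
            break
        i += 1
    else:
        # while/else: this branch runs only if the loop never hit 'break'
        return -1
    return i


def read_header(path):
    f = open(path)
    try:
        # walrus operator binds and tests in a single expression
        if line := f.readline():
            return line.strip()
        return ""
    finally:
        # try/finally guarantees the file closes even on early return
        f.close()


if __name__ == "__main__":
    print(list(chunks([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
    print(asyncio.run(double(21)))           # 42
    print(find(["a", "b"], "c"))             # -1
```

A student who has only seen loops, lists, and functions can run this, but debugging it when it misbehaves is a different matter entirely.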
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
The Agent Behavior: Model, Governance and Challenges in the AI Digital Age
Qiang Zhang, Pei Yan, Yijia Xu, Chuanpo Fu, Yong Fang, Yang Liu
https://arxiv.org/abs/2508.14415 https…
Membrane: A Cryptographic Access Control System for Data Lakes
Sam Kumar, Samyukta Yagati, Conor Power, David E. Culler, Raluca Ada Popa
https://arxiv.org/abs/2509.08740 https:/…
Characterizing the Dynamics of Conspiracy Related German Telegram Conversations during COVID-19
Elisabeth Höldrich, Mathias Angermaier, Jana Lasser, Joao Pinheiro-Neto
https://arxiv.org/abs/2507.13398
What Shapes User Trust in ChatGPT? A Mixed-Methods Study of User Attributes, Trust Dimensions, Task Context, and Societal Perceptions among University Students
Kadija Bouyzourn, Alexandra Birch
https://arxiv.org/abs/2507.05046
The Architecture of Trust: A Framework for AI-Augmented Real Estate Valuation in the Era of Structured Data
Petteri Teikari, Mike Jarrell, Maryam Azh, Harri Pesola
https://arxiv.org/abs/2508.02765
Fully Decentralised Consensus for Extreme-scale Blockchain
Siamak Abdi, Giuseppe Di Fatta, Atta Badii, Giancarlo Fortino
https://arxiv.org/abs/2508.02595 https://
A Global South Strategy for Evaluating Research Value with ChatGPT
Robin Nunkoo, Mike Thelwall
https://arxiv.org/abs/2508.01882 https://arxiv.org/pdf/2508.…
Aspect-Oriented Programming in Secure Software Development: A Case Study of Security Aspects in Web Applications
Mterorga Ukor
https://arxiv.org/abs/2509.07449 https://
HARMONIC: A Content-Centric Cognitive Robotic Architecture
Sanjay Oruganti, Sergei Nirenburg, Marjorie McShane, Jesse English, Michael K. Roberts, Christian Arndt, Carlos Gonzalez, Mingyo Seo, Luis Sentis
https://arxiv.org/abs/2509.13279
Can I Trust This Chatbot? Assessing User Privacy in AI-Healthcare Chatbot Applications
Ramazan Yener, Guan-Hung Chen, Ece Gumusel, Masooda Bashir
https://arxiv.org/abs/2509.14581
News Sentiment Embeddings for Stock Price Forecasting
Ayaan Qayyum
https://arxiv.org/abs/2507.01970 https://arxiv.org/pdf/2507.01970
Climate Policy Radar is developing an open-source knowledge graph for climate policy. At Berlin Buzzwords, Harrison Pim and Fred O'Loughlin discussed how they combine in-house expertise with a scalable data infrastructure in order to identify key concepts within thousands of global climate policy documents.
Watch the full session: https://youtu.be/H6BhF6zSvp4?si=Jzioutp8n__2XY2c
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Developing a Responsible AI Framework for Healthcare in Low Resource Countries: A Case Study in Nepal and Ghana
Hari Krishna Neupane, Bhupesh Kumar Mishra
https://arxiv.org/abs/2508.12389
A Curriculum Learning Approach to Reinforcement Learning: Leveraging RAG for Multimodal Question Answering
Chenliang Zhang, Lin Wang, Yuanyuan Lu, Yusheng Qi, Kexin Wang, Peixu Hou, Wenshi Chen
https://arxiv.org/abs/2508.10337
Beneath the Mask: Can Contribution Data Unveil Malicious Personas in Open-Source Projects?
Ruby Nealon
https://arxiv.org/abs/2508.13453 https://arxiv.org/p…
Improving Real-Time Concept Drift Detection using a Hybrid Transformer-Autoencoder Framework
N Harshit, K Mounvik
https://arxiv.org/abs/2508.07085 https://…
At Berlin Buzzwords 2025, Viola Rädle & Raphael Franke presented gamma_flow, an open-source Python package for real-time spectral data analysis.
Watch the full session: https://youtu.be/cDCtdStMWuc?si=FyrOSJ8UIjmeQfLl
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI system
Yingfan Zhou, Ester Chen, Manasa Pisipati, Aiping Xiong, Sarah Rajtmajer
https://arxiv.org/abs/2508.01906
Do Role-Playing Agents Practice What They Preach? Belief-Behavior Consistency in LLM-Based Simulations of Human Trust
Amogh Mannekote, Adam Davies, Guohao Li, Kristy Elizabeth Boyer, ChengXiang Zhai, Bonnie J Dorr, Francesco Pinto
https://arxiv.org/abs/2507.02197
FIDELIS: Blockchain-Enabled Protection Against Poisoning Attacks in Federated Learning
Jane Carney, Kushal Upreti, Gaby G. Dagher, Tim Andersen
https://arxiv.org/abs/2508.10042 …
LLMs are now part of our daily work, making coding easier. At Berlin Buzzwords 2025, Ivan Dolgov discussed how they built an in-house LLM for AI code completion in JetBrains products, covering design choices, data preparation, training and model evaluation.
Watch the full session: https://youtu.be/yLHCwi_mgvQ?si=CpW8bGd9jWHq43Ez
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: https://tickets.plainschwarz.com/bbuzz26/c/8Hvk0ZvJA/
Invisible Watermarks, Visible Gains: Steering Machine Unlearning with Bi-Level Watermarking Design
Yuhao Sun, Yihua Zhang, Gaowen Liu, Hongtao Xie, Sijia Liu
https://arxiv.org/abs/2508.10065
Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle
Anbin Wu (The College of Intelligence and Computing, Tianjin University), Zhiyong Feng (The College of Intelligence and Computing, Tianjin University), Ruitao Feng (The Southern Cross University)
https://arxiv.org/abs/2507.02309
Threat Modeling for Enhancing Security of IoT Audio Classification Devices under a Secure Protocols Framework
Sergio Benlloch-Lopez, Miquel Viel-Vazquez, Javier Naranjo-Alcazar, Jordi Grau-Haro, Pedro Zuccarello
https://arxiv.org/abs/2509.14657
Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments
Christoph Schnabl, Daniel Hugenroth, Bill Marino, Alastair R. Beresford
https://arxiv.org/abs/2506.23706
Innovating Augmented Reality Security: Recent E2E Encryption Approaches
Hamish Alsop, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker, Mohamed Amine Ferrag
https://arxiv.org/abs/2509.10313
NVIDIA GPU Confidential Computing Demystified
Zhongshu Gu, Enriquillo Valdez, Salman Ahmed, Julian James Stephen, Michael Le, Hani Jamjoom, Shixuan Zhao, Zhiqiang Lin
https://arxiv.org/abs/2507.02770
Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems
Muhammad Sharshar, Ahmad Mohammad Saber, Davor Svetinovic, Amr M. Youssef, Deepa Kundur, Ehab F. El-Saadany
https://arxiv.org/abs/2507.22239