
2025-06-05 07:16:39
Beware! The AI Act Can Also Apply to Your AI Research Practices
Alina Wernick, Kristof Meding
https://arxiv.org/abs/2506.03218 https://
Analysis – When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack
🤖 https://venturebeat.com/ai/when-your-llm-calls-the-cops-claude-4s-whistle-blow-and-the-new-agentic-ai-risk-stack/…
AI coding startups are at risk of being disrupted by Google, Microsoft, and OpenAI; source: Microsoft's GitHub Copilot grew to over $500M in revenue last year (Reuters)
https://www.reuters.com/business/ai-vibe-codi…
Developing a Risk Identification Framework for Foundation Model Uses
David Piorkowski, Michael Hind, John Richards, Jacquelyn Martino
https://arxiv.org/abs/2506.02066
AI Risk-Management Standards Profile for General-Purpose AI (GPAI) and Foundation Models
Anthony M. Barrett, Jessica Newman, Brandie Nonnecke, Nada Madkour, Dan Hendrycks, Evan R. Murphy, Krystal Jackson, Deepika Raman
https://arxiv.org/abs/2506.23949
Misalignment or misuse? The AGI alignment tradeoff
Max Hellrigel-Holderbaum, Leonard Dung
https://arxiv.org/abs/2506.03755 https://ar…
"Inside a plan to use AI to amplify doubts about the dangers of pollutants"
#AI #ArtificialIntelligence #Climate
Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work
Alice Qian Zhang, Ryland Shaw, Laura Dabbish, Jina Suh, Hong Shen
https://arxiv.org/abs/2505.24246
FPF and OneTrust launch updated Conformity Assessment under the EU AI Act: guide and infographic
https://fpf.org/blog/fpf-and-onetrust-launch-updated-conformity-assessment-under-the-eu-ai-act-guide-and-infographi…
This https://arxiv.org/abs/2502.14708 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_eco…
Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI
Shaun Ee, Chris Covino, Cara Labrador, Christina Krawec, Jam Kraprayoon, Joe O'Brien
https://arxiv.org/abs/2506.02035
San Diego-based Clearspeed, which offers AI-driven voice-based risk assessment tech for 60 languages, raised a $60M Series D, taking its total funding to $110M (Duncan Riley/SiliconANGLE)
https://siliconangle.com/2025/06/26/cl
This https://arxiv.org/abs/2505.23397 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csAI_…
Defiant peers have delivered an ultimatum to the government – calling on it to offer artists copyright protection against artificial intelligence companies or risk losing a key piece of legislation.
The government suffered a fifth defeat in the House of Lords over controversial plans to allow AI companies to train their models using copyrighted material.
https://www.theguardian.com/technology/2025/jun/04/ministers-offer-concessions-ai-copyright-avoid-fifth-lords-defeat?CMP=Share_iOSApp_Other
Replaced article(s) found for q-fin.RM. https://arxiv.org/list/q-fin.RM/new
[1/1]:
- Explainable AI for Comprehensive Risk Assessment for Financial Reports: A Lightweight Hierarchical Transformer Network Approach
Xue Wen Tan, Stanley Kok
Our urban #MobilityAnalytics project EMERALDS hosts its next webinar tomorrow:
🗓️ 27 June | ⏰ 14:30 CEST | 📍Online
How The Hague is using data to manage urban risks.
Register 👉
Systematic Hazard Analysis for Frontier AI using STPA
Simon Mylius
https://arxiv.org/abs/2506.01782 https://arxiv.org/pdf/2506.01782
Internal docs show Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments, including in sensitive areas like violent content (NPR)
https://www.npr.org/2025/05/31/nx-s1-5407870/meta-ai-facebook-instagram-risks…
The lethal trifecta for #AI agents: private data, untrusted content, and external communication
https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
This is quite close to a risk I spoke to someone about a couple of months ago.
Government domains might host documents containing opinions or other third-party information (consultation responses, evidence, etc.), but LLM training ingestion may assume the context of the government domain and fold that third-party content into an apparent government position.
> Google’s AI overview interpreted the PDF document as an official EU stance, blending the information into a response
Explainable AI for Comprehensive Risk Assessment for Financial Reports: A Lightweight Hierarchical Transformer Network Approach
Xue Wen Tan, Stanley Kok
https://arxiv.org/abs/2506.23767
Agentic AI as the enemy's agent.
It is a bad idea to allow an LLM access to internal data and external communication (web pages, APIs, email, …) at the same time.
#AgenticAI #DataLeak #LLM
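Neither post includes code; as a rough illustration of the principle, here is a minimal Python sketch of a pre-flight check that flags an agent tool set combining all three "lethal trifecta" capabilities. The ToolSpec structure and tool names are hypothetical, for illustration only.

```python
# Illustrative sketch only: flag agent configurations that combine private data,
# untrusted content, and external communication (the "lethal trifecta").
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolSpec:
    name: str
    reads_private_data: bool = False        # e.g. internal docs, mailboxes, databases
    ingests_untrusted_content: bool = False  # e.g. web pages, inbound email
    communicates_externally: bool = False    # e.g. outbound HTTP, email, API calls

def trifecta_violations(tools: list[ToolSpec]) -> list[str]:
    """Return a warning if the combined tool set grants all three capabilities at once."""
    private = [t.name for t in tools if t.reads_private_data]
    untrusted = [t.name for t in tools if t.ingests_untrusted_content]
    external = [t.name for t in tools if t.communicates_externally]
    if private and untrusted and external:
        return [
            f"private data via {private}, untrusted content via {untrusted}, "
            f"external communication via {external}: exfiltration risk, "
            "drop or gate at least one capability"
        ]
    return []

if __name__ == "__main__":
    tools = [
        ToolSpec("read_internal_wiki", reads_private_data=True),
        ToolSpec("browse_web", ingests_untrusted_content=True),
        ToolSpec("send_email", communicates_externally=True),
    ]
    for warning in trifecta_violations(tools):
        print("WARNING:", warning)
```

In practice such a check would sit at tool-registration time and refuse (rather than merely warn about) configurations that grant all three capabilities to the same agent.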
Concerning the Responsible Use of AI in the US Criminal Justice System
Cristopher Moore, Catherine Gill, Nadya Bliss, Kevin Butler, Stephanie Forrest, Daniel Lopresti, Mary Lou Maher, Helena Mentis, Shashi Shekhar, Amanda Stent, Matthew Turk
https://arxiv.org/abs/2506.00212
«The singularity is near, not because machine intelligence is suddenly surging, but because we are content to risk extinguishing the spark of human consciousness by exposing ourselves to endless streams of artificially generated bullshit.»
Nice post by @…! I've definitely noticed this when interacting with people who've been too deep into using "AI".
https://ideophone.org/bringing-about-the-singularity-by-giving-up-thinking/
Silence is Golden: Leveraging Adversarial Examples to Nullify Audio Control in LDM-based Talking-Head Generation
Yuan Gan, Jiaxu Miao, Yunze Wang, Yi Yang
https://arxiv.org/abs/2506.01591
Here’s a person, Baldur Bjarnason, @…, who thinks I’m really wrong about LLMs and coding. I mostly don’t agree but the argument is well-presented: https://www.baldurbjarna…
Risk-Guided Diffusion: Toward Deploying Robot Foundation Models in Space, Where Failure Is Not An Option
Rohan Thakker, Adarsh Patnaik, Vince Kurtz, Jonas Frey, Jonathan Becktor, Sangwoo Moon, Rob Royce, Marcel Kaufmann, Georgios Georgakis, Pascal Roth, Joel Burdick, Marco Hutter, Shehryar Khattak
https://arxiv.org/abs/2506.1760…
Watermarking Without Standards Is Not AI Governance
Alexander Nemecek, Yuzhou Jiang, Erman Ayday
https://arxiv.org/abs/2505.23814 https://
The fundamental problem of risk prediction for individuals: health AI, uncertainty, and personalized medicine
Lasai Barreñada, Ewout W Steyerberg, Dirk Timmerman, Doranne Thomassen, Laure Wynants, Ben Van Calster
https://arxiv.org/abs/2506.17141
This https://arxiv.org/abs/2505.23436 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csAI_…
Mitigating Gambling-Like Risk-Taking Behaviors in Large Language Models: A Behavioral Economics Approach to AI Safety
Y. Du
https://arxiv.org/abs/2506.22496
A look at California's 52-page California Report on Frontier Policy, which proposes an AI regulatory framework based on transparency, risk assessments, and more (Hayden Field/The Verge)
https://www.theverge.com/ai-artificial-int
VisText-Mosquito: A Multimodal Dataset and Benchmark for AI-Based Mosquito Breeding Site Detection and Reasoning
Md. Adnanul Islam, Md. Faiyaz Abdullah Sayeedi, Md. Asaduzzaman Shuvo, Muhammad Ziaur Rahman, Shahanur Rahman Bappy, Raiyan Rahman, Swakkhar Shatabda
https://arxiv.org/abs/2506.14629
A Question Bank to Assess AI Inclusivity: Mapping out the Journey from Diversity Errors to Inclusion Excellence
Rifat Ara Shams, Didar Zowghi, Muneera Bano
https://arxiv.org/abs/2506.18538
DREAM: On hallucinations in AI-generated content for nuclear medicine imaging
Menghua Xia, Reimund Bayerlein, Yanis Chemli, Xiaofeng Liu, Jinsong Ouyang, Georges El Fakhri, Ramsey D. Badawi, Quanzheng Li, Chi Liu
https://arxiv.org/abs/2506.13995
What has ... DRM (?!) ever done for us?
https://infosec.exchange/@dangoodin/114546912959367470
Social Group Bias in AI Finance
Thomas R. Cook, Sophia Kazinnik
https://arxiv.org/abs/2506.17490 https://arxiv.org/pdf/2506.17490
This https://arxiv.org/abs/2505.18422 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCY_…
Calculating Software's Energy Use and Carbon Emissions: A Survey of the State of Art, Challenges, and the Way Ahead
Priyavanshi Pathania, Nikhil Bamby, Rohit Mehra, Samarth Sikand, Vibhu Saujanya Sharma, Vikrant Kaulgud, Sanjay Podder, Adam P. Burden
https://arxiv.org/abs/2506.09683 …
Mapping Caregiver Needs to AI Chatbot Design: Strengths and Gaps in Mental Health Support for Alzheimer's and Dementia Caregivers
Jiayue Melissa Shi, Dong Whi Yoo, Keran Wang, Violeta J. Rodriguez, Ravi Karkar, Koustuv Saha
https://arxiv.org/abs/2506.15047
Researchers find the first known "zero-click" attack on an AI agent; the now-fixed flaw in Microsoft 365 Copilot would let a hacker attack a user via an email (Sharon Goldman/Fortune)
https://fortune.com/2025/06/11/microsoft-cop…
Last year I wrote a paper on how "GenAI" is a risk for #citizenscience (being an academic paper, it's still slowly dying in review). One of the points was that using "AI" devalues the contributions made by human volunteers, leading to contributors (rightfully) disengaging.
Yesterday, #inaturalist announced an "AI"-partnership with Google, and unsurprisingly that's exactly the backlash they got…
https://www.inaturalist.org/blog/113184-inaturalist-receives-grant-to-improve-species-suggestions
This https://arxiv.org/abs/2506.03988 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCV_…
Programming Geotechnical Reliability Algorithms using Generative AI
Atma Sharma, Jie Zhang, Meng Lu, Shuangyi Wu, Baoxiang Li
https://arxiv.org/abs/2506.19536
This https://arxiv.org/abs/2502.09716 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCY_…
Treefera, which uses satellite imagery, drone imagery, and AI to provide real-time insights into supply chains, raised a $30M Series B led by Notion Capital (Cate Lawrence/Tech.eu)
https://tech.eu/2025/06/03/treefera-secure…
Artificial Intelligence in Team Dynamics: Who Gets Replaced and Why?
Xienan Cheng, Mustafa Dogan, Pinar Yildirim
https://arxiv.org/abs/2506.12337 https://
This https://arxiv.org/abs/2503.01148 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qfi…
Replaced article(s) found for cs.ET. https://arxiv.org/list/cs.ET/new/
[1/1]:
AI as Decision-Maker: Ethics and Risk Preferences of LLMs
https://
Military AI Cyber Agents (MAICAs) Constitute a Global Threat to Critical Infrastructure
Timothy Dubber, Seth Lazar
https://arxiv.org/abs/2506.12094 https:/…
Using Large Language Models to Simulate Human Behavioural Experiments: Port of Mars
Oliver Slumbers, Joel Z. Leibo, Marco A. Janssen
https://arxiv.org/abs/2506.05555
AI-based Approach in Early Warning Systems: Focus on Emergency Communication Ecosystem and Citizen Participation in Nordic Countries
Fuzel Shaik, Getnet Demil, Mourad Oussalah
https://arxiv.org/abs/2506.18926
Contemporary AI foundation models increase biological weapons risk
Roger Brent, T. Greg McKelvey Jr
https://arxiv.org/abs/2506.13798 https://
Deep Reinforcement Learning for Investor-Specific Portfolio Optimization: A Volatility-Guided Asset Selection Approach
Arishi Orra, Aryan Bhambu, Himanshu Choudhary, Manoj Thakur, Selvaraju Natarajan
https://arxiv.org/abs/2505.03760
This https://arxiv.org/abs/2502.04951 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCR_…
Toward a Global Regime for Compute Governance: Building the Pause Button
Ananthi Al Ramiah, Raymond Koopmanschap, Josh Thorsteinson, Sadruddin Khan, Jim Zhou, Shafira Noh, Joep Meindertsma, Farhan Shafiq
https://arxiv.org/abs/2506.20530
Foundation Time-Series AI Model for Realized Volatility Forecasting
Anubha Goel, Puneet Pasricha, Martin Magris, Juho Kanniainen
https://arxiv.org/abs/2505.11163
The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI
Barbara Oakley, Michael Johnston, Ken-Zen Chen, Eulho Jung, Terrence J. Sejnowski
https://arxiv.org/abs/2506.11015
Replaced article(s) found for econ.GN. https://arxiv.org/list/econ.GN/new/
[1/1]:
AI as Decision-Maker: Ethics and Risk Preferences of LLMs
https:…