Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csLG_bot@mastoxiv.page
2025-08-15 10:21:22

Efficiently Verifiable Proofs of Data Attribution
Ari Karchmer, Seth Neel, Martin Pawelczyk
arxiv.org/abs/2508.10866 arxiv.org/pdf/2508.108…

@kurtsh@mastodon.social
2025-09-13 22:20:49

What would compel someone, with everything happening right now, to trust a tech company to host & protect your personal DNA or health data?
Ignorance? Sloth? Apathy? All 3?
From: @…

Trump is recklessly shredding our economic advantages in the global economy.
Companies that are looking to make investments shouldn’t have to worry if Trump has cooked the books on economic data--but that’s exactly what’s happening following his decision.
-- Katie Porter

@arXiv_csCR_bot@mastoxiv.page
2025-08-15 09:12:32

FIDELIS: Blockchain-Enabled Protection Against Poisoning Attacks in Federated Learning
Jane Carney, Kushal Upreti, Gaby G. Dagher, Tim Andersen
arxiv.org/abs/2508.10042

@arXiv_csAI_bot@mastoxiv.page
2025-08-15 08:19:32

A Curriculum Learning Approach to Reinforcement Learning: Leveraging RAG for Multimodal Question Answering
Chenliang Zhang, Lin Wang, Yuanyuan Lu, Yusheng Qi, Kexin Wang, Peixu Hou, Wenshi Chen
arxiv.org/abs/2508.10337

@privacity@social.linux.pizza
2025-08-07 19:11:19

FPF at PDP Week 2025: Generative AI, Digital Trust, and the Future of Cross-Border Data Transfers in APAC
fpf.org/blog/fpf-at-pdp-week-2

@jom@social.kontrollapparat.de
2025-08-13 15:49:15

The #owncloud hosting service owncloud.online has now been given away as well. Apparently there were still customers who hadn't yet moved to the proprietary

@arXiv_csCR_bot@mastoxiv.page
2025-08-15 09:29:43

Invisible Watermarks, Visible Gains: Steering Machine Unlearning with Bi-Level Watermarking Design
Yuhao Sun, Yihua Zhang, Gaowen Liu, Hongtao Xie, Sijia Liu
arxiv.org/abs/2508.10065

@arXiv_csHC_bot@mastoxiv.page
2025-07-08 12:42:10

What Shapes User Trust in ChatGPT? A Mixed-Methods Study of User Attributes, Trust Dimensions, Task Context, and Societal Perceptions among University Students
Kadija Bouyzourn, Alexandra Birch
arxiv.org/abs/2507.05046

@arXiv_eessIV_bot@mastoxiv.page
2025-08-07 07:49:44

Technical specification of a framework for the collection of clinical images and data
Alistair Mackenzie (Royal Surrey NHS Foundation Trust, Guildford, UK), Mark Halling-Brown (Royal Surrey NHS Foundation Trust, Guildford, UK), Ruben van Engen (Dutch Expert Centre for Screening), Carlijn Roozemond (Dutch Expert Centre for Screening), Lucy Warren (Royal Surrey NHS Foundation Trust, Guildford, UK), Dominic Ward (Royal Surrey NHS Foundation Trust, Guildford, UK), Nadia Smith (Royal Surrey…

@metacurity@infosec.exchange
2025-09-08 13:25:56

Check out today's Metacurity for the most critical infosec developments you might have missed over the weekend, including
--Chinese espionage campaign targeted House staffers ahead of trade talks,
--Ethical hackers uncover catastrophic flaws in restaurant chain's platforms,
--Salesloft Drift hack began last March,
--Customer data stolen in Wealthsimple breach,
--Trump to formally nominate Harman for NSA/Cybercom slot,
--Don't trust XChat's e…

@kubikpixel@chaos.social
2025-06-29 20:35:09

»Life360 Secretly Sells Users’ Geolocation Data to Third Parties, Class Action Claims:
A proposed class action alleges family tracking app Life360 secretly sells data about users’ locations and movements to third parties.«
When apps are (almost) free, they make unlimited money by selling you. Don't trust any of them if they don't communicate their data protection practices.
🕵️

@arXiv_csCR_bot@mastoxiv.page
2025-09-11 09:32:03

Membrane: A Cryptographic Access Control System for Data Lakes
Sam Kumar, Samyukta Yagati, Conor Power, David E. Culler, Raluca Ada Popa
arxiv.org/abs/2509.08740

@mia@hcommons.social
2025-07-09 09:32:52

Doesn't seem controversial. 'Palantir (inclusive of any associated companies) is an unacceptable choice of partner to create a Federated Data Platform for the NHS. It recognises that this partnership threatens to undermine public trust in NHS data systems'

@alejandrobdn@social.linux.pizza
2025-08-05 16:27:59

AWS deleted my 10-year account and all data without warning
"On July 23, 2025, AWS deleted my 10-year-old account and every byte of data I had stored with them. No warning. No grace period. No recovery options. Just complete digital annihilation".
seuros.com/blog/aws…

@arXiv_csAI_bot@mastoxiv.page
2025-09-11 07:31:02

Trust Semantics Distillation for Collaborator Selection via Memory-Augmented Agentic AI
Botao Zhu, Jeslyn Wang, Dusit Niyato, Xianbin Wang
arxiv.org/abs/2509.08151

@vosje62@mastodon.nl
2025-08-04 10:25:04

AWS deleted my 10-year account and all data without warning
seuros.com/blog/aws-deleted-my
- The Architecture That Should Have Protected Me -
Small reminder that there is no 100% protection in the…

@Techmeme@techhub.social
2025-07-31 12:25:41

Stack Overflow survey: 84% of developers use or plan to use AI tools in their workflow, up from 76% in 2024, and 33% trust AI accuracy, down from 43% in 2024 (Sean Michael Kerner/VentureBeat)
venturebeat.com/ai/stack-overf

@publicvoit@graz.social
2025-08-05 07:33:14

"The #cloud isn’t your friend. It’s a business. And when their business needs conflict with your data’s existence, guess which one wins?"
"I wasn’t alone in being targeted by AWS—especially #MENA. Hundreds of Reddit threads, websites, forums, all telling similar stories."
"

@usul@piaille.fr
2025-08-04 18:24:39

AWS deleted my 10-year account and all data without warning
seuros.com/blog/aws-deleted-my

@arXiv_csCY_bot@mastoxiv.page
2025-08-06 08:39:30

The Architecture of Trust: A Framework for AI-Augmented Real Estate Valuation in the Era of Structured Data
Petteri Teikari, Mike Jarrell, Maryam Azh, Harri Pesola
arxiv.org/abs/2508.02765

@arXiv_csCR_bot@mastoxiv.page
2025-09-15 09:31:11

Innovating Augmented Reality Security: Recent E2E Encryption Approaches
Hamish Alsop, Leandros Maglaras, Helge Janicke, Iqbal H. Sarker, Mohamed Amine Ferrag
arxiv.org/abs/2509.10313

@mgorny@social.treehouse.systems
2025-08-31 11:47:07

Today, using #GAFAM software is no longer a matter of needs or preferences. Today, it is a choice in the domain of ethics.
If someone gives me their personal data, be it their contact data, image, or anything else, it is my solemn duty to keep that data secure. If I give it away to an app that uses it for marketing purposes, to train models, to manipulate people, or simply sells it, I fail that trust.
So don't give apps unnecessary permissions, or simply don't use such apps. And if the whole system abuses your data — well, if you can't change it, there are always notebooks and pens, you know.
#FreeSoftware

@arXiv_astrophHE_bot@mastoxiv.page
2025-09-10 09:00:31

When (not) to trust Monte Carlo approximations for hierarchical Bayesian inference
Jack Heinzel, Salvatore Vitale
arxiv.org/abs/2509.07221

@ErikJonker@mastodon.social
2025-07-07 14:02:38

Just wondering when a European "LinkedIn" will start, or if there is already one available? We clearly can't trust a US company with all our personal data, relations, employers, etc.
#linkedin #eu #privacy

@newsie@darktundra.xyz
2025-09-04 12:13:21

Czech cyber agency warns against using services and products that send data to China therecord.media/czech-nukib-wa

@berlinbuzzwords@floss.social
2025-09-09 11:00:04

Climate Policy Radar is developing an open-source knowledge graph for climate policy. At Berlin Buzzwords, Harrison Pim and Fred O'Loughlin discussed how they combine in-house expertise with a scalable data infrastructure in order to identify key concepts within thousands of global climate policy documents.
Watch the full session: youtu.be/H6BhF6zSvp4?si=Jziout
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: tickets.plainschwarz.com/bbuzz

@mszll@datasci.social
2025-09-06 18:10:17

Very interesting looking paper: In tech we trust: A history of technophilia in the Intergovernmental Panel on Climate Change's (IPCC) climate mitigation expertise
sciencedirect.com/science/arti
Hopefully someone wil…

Frequency per page of selected terms related to key mitigation strategies in the six full reports of IPCC Working Group III published between 1992 and 2022. The data highlights the overwhelming dominance of the term “technology”, particularly in the third and fourth assessment reports (2001 and 2007), where it peaked at over 1.5 mentions per page. In contrast, demand-side concepts such as “lifestyle”, “behavioural change”, and “sufficiency” remained marginal until a notable increase in the 2022…

@arXiv_csSE_bot@mastoxiv.page
2025-09-10 07:49:31

Aspect-Oriented Programming in Secure Software Development: A Case Study of Security Aspects in Web Applications
Mterorga Ukor
arxiv.org/abs/2509.07449

@arXiv_csLG_bot@mastoxiv.page
2025-08-12 11:40:03

Improving Real-Time Concept Drift Detection using a Hybrid Transformer-Autoencoder Framework
N Harshit, K Mounvik
arxiv.org/abs/2508.07085

@servelan@newsie.social
2025-07-20 01:47:30

23andMe's Data Sold to Nonprofit Run by Its Co-Founder - 'And I Still Don't Trust It' - Slashdot
science.slashdot.org/story/25/

@arXiv_csCV_bot@mastoxiv.page
2025-08-04 10:10:01

Minimum Data, Maximum Impact: 20 annotated samples for explainable lung nodule classification
Luisa Gallée, Catharina Silvia Lisson, Christoph Gerhard Lisson, Daniela Drees, Felix Weig, Daniel Vogele, Meinrad Beer, Michael Götz
arxiv.org/abs/2508.00639

@Mediagazer@mstdn.social
2025-07-05 16:35:57

[Thread] an NYT standards editor responds to criticism of its recent Zohran Mamdani story, says the "ultimate source" was Columbia data that Mamdani confirmed (Patrick Healy/@patrickhealynyt)
x.com/patrickhealynyt/status/1

@arXiv_csSI_bot@mastoxiv.page
2025-08-20 09:25:00

Trust and Reputation in Data Sharing: A Survey
Wenbo Wu, George Konstantinidis
arxiv.org/abs/2508.14028 arxiv.org/pdf/2508.14028

@fanf@mendeddrum.org
2025-08-24 08:42:03

from my link log —
Cracking the Vault: flaws in authentication, identity, and authorization in HashiCorp Vault.
cyata.ai/blog/cracking-the-vau

@arXiv_csHC_bot@mastoxiv.page
2025-08-05 11:11:40

Effect of AI Performance, Risk Perception, and Trust on Human Dependence in Deepfake Detection AI system
Yingfan Zhou, Ester Chen, Manasa Pisipati, Aiping Xiong, Sarah Rajtmajer
arxiv.org/abs/2508.01906

@marcel@waldvogel.family
2025-06-23 15:36:12

Dorothea Baur reflecting on #AI tech bros going all in on even your most personal data. Claiming to help you solve a problem which they helped create in the beginning:
"A breach of trust enabled by AI now becomes the justification for surveillance-based trust systems. And the very people who helped break the system are offering to fix it – in exchange for your iris. That’s not a safety fe…

@arXiv_csNI_bot@mastoxiv.page
2025-07-02 09:09:30

Toward Edge General Intelligence with Multiple-Large Language Model (Multi-LLM): Architecture, Trust, and Orchestration
Haoxiang Luo, Yinqiu Liu, Ruichen Zhang, Jiacheng Wang, Gang Sun, Dusit Niyato, Hongfang Yu, Zehui Xiong, Xianbin Wang, Xuemin Shen
arxiv.org/abs/2507.00672

@arXiv_qbioQM_bot@mastoxiv.page
2025-07-24 08:16:29

Machine learning-based multimodal prognostic models integrating pathology images and high-throughput omic data for overall survival prediction in cancer: a systematic review
Charlotte Jennings (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Andrew Broad (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Lucy Godson (National Pathology Imaging Cooperative, Leeds Teaching Hospitals NHS Trust, Leeds, UK), Emily…

@Techmeme@techhub.social
2025-06-19 04:50:59

Scale AI emphasizes that it remains an independent company and says Meta will not have access to Scale's internal systems or customers' confidential information (Scale AI)
scale.com/blog/customer-trust-

@aardrian@toot.cafe
2025-06-19 17:27:33

Most viewed from what? People who rarely log in? People who rarely post? People named Adrian?
Sorry, LinkedIn, not buying into your bullshit to squeeze yet more personal data out of me.
Maybe do something to boost trust?

LinkedIn notification with my profile image: “You have one of the most-viewed profiles. Add verification to boost trust.”

@vosje62@mastodon.nl
2025-07-02 20:09:06

Just looking up the name of one of the characters from Paulus de boskabouter.
Paulus de boskabouter characters – Qwant
qwant.com/
@…
/2

Searching for Paulus de boskabouter characters

@tiotasram@kolektiva.social
2025-08-04 15:49:00

Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than in a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR: its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, this paper, in which experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down (arxiv.org/abs/2507.09089), is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a degree of care that those students don't yet know how to exercise, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets), but those would be entirely different tools.
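(To make that concrete, here's a hypothetical Python analog of what I mean; nothing below is real model output. An assistant's "simple" answer to "make a small page that shows live data" can silently assume frameworks, here Flask and requests, that the course never introduced or installed:)

# Hypothetical assistant output. It silently assumes Flask and requests
# are installed and configured, neither of which an intro course covered.
from flask import Flask, jsonify   # framework never introduced in class
import requests                    # third-party dependency, absent from the course environment

app = Flask(__name__)

@app.route("/data")
def data():
    # External service call: needs network access and an API the student doesn't control.
    payload = requests.get("https://example.com/api").json()
    return jsonify(payload)

if __name__ == "__main__":
    app.run(debug=True)   # students were never taught how to run or deploy this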
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand, and quickly re-prompt or ask for help from an instructor or TA, who helps them get rid of the stuff they don't understand and re-prompt or manually add stuff they do understand. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that, regardless of AI, we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
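(Again purely for illustration, not actual model output: the snippet below is valid, working Python, yet it packs a generator, the walrus operator, a while/else, and try/finally into a dozen lines. Now imagine asking a second-year student to find a bug somewhere in that.)

# Hypothetical LLM-style code: correct, but construct-dense.
def read_chunks(path, size=1024):
    """Yield fixed-size chunks from a binary file."""
    f = open(path, "rb")
    try:
        while chunk := f.read(size):   # walrus: assign and test in one expression
            yield chunk                # makes the whole function a generator
        else:                          # runs only when the loop ends normally
            print("reached end of file")
    finally:
        f.close()                      # runs even if the consumer abandons the generator

with open("data.bin", "wb") as out:   # create a small test file so this runs
    out.write(b"x" * 3000)

for chunk in read_chunks("data.bin"):
    print(len(chunk))                 # prints 1024, 1024, 952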
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of aspects of our institutions that make life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2

@berlinbuzzwords@floss.social
2025-09-08 11:00:15

At Berlin Buzzwords 2025, Viola Rädle & Raphael Franke presented gamma_flow, an open-source Python package for real-time spectral data analysis.
Watch the full session: youtu.be/cDCtdStMWuc?si=FyrOSJ
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: tickets.plainschwarz.com/bbuzz

@arXiv_csDC_bot@mastoxiv.page
2025-08-05 10:03:20

Fully Decentralised Consensus for Extreme-scale Blockchain
Siamak Abdi, Giuseppe Di Fatta, Atta Badii, Giancarlo Fortino
arxiv.org/abs/2508.02595

@arXiv_eessSY_bot@mastoxiv.page
2025-07-25 08:01:12

Trusted Data Fusion, Multi-Agent Autonomy, Autonomous Vehicles
R. Spencer Hallyburton, Miroslav Pajic
arxiv.org/abs/2507.17875 arxiv.org/pd…

@mariyadelano@hachyderm.io
2025-07-21 19:00:54

Oh no it happened - client for a research project I’m working on got upset that we’re doing manual data analysis of survey responses, and complained about why we are so slow when their internal team working on a different report got “everything done in a couple of days with #AI tools”
And then they told us that waiting for proper human analysis is a “waste of time” and that we need to just chuck our dataset into AI and “get it over with”
I really don’t know what to do right now 🥲
Trying to do this properly on their expected timeline will mean very little sleep for multiple days, but giving up on the project quality and dumping it into AI will make this entire project a waste of time. (As I wouldn't be able to trust the output of the analysis, or be proud enough of it to showcase the final report as an example of our work, not to mention that I don't want to support this expectation to rush everything at work with these AI models.)

@arXiv_csDL_bot@mastoxiv.page
2025-08-05 09:24:00

A Global South Strategy for Evaluating Research Value with ChatGPT
Robin Nunkoo, Mike Thelwall
arxiv.org/abs/2508.01882 arxiv.org/pdf/2508.…

@publicvoit@graz.social
2025-07-21 15:54:31

#Microsoft outsourced administration of classified #DoD data to cheap workers in #China. 🇨🇳 🕵️
My latest update on

@arXiv_csCY_bot@mastoxiv.page
2025-06-18 08:11:15

The Synthetic Mirror -- Synthetic Data at the Age of Agentic AI
Marcelle Momha
arxiv.org/abs/2506.13818 arxiv.org/pdf…

Folks are getting a wee bit more concerned about their privacy now that Donald Trump is in charge of the US.
You may have noticed that he and his regime love getting their hands on other people's data.
Privacy isn't the only issue. -- Can you trust Microsoft to deliver on its service promises under American political pressure?
Ask the EU-based International Criminal Court (ICC) which after it issued arrest warrants for Israeli Prime Minister Benjamin Netanyahu fo…

@arXiv_csHC_bot@mastoxiv.page
2025-08-26 11:03:36

TRUCE-AV: A Multimodal dataset for Trust and Comfort Estimation in Autonomous Vehicles
Aditi Bhalla, Christian Hellert, Enkelejda Kasneci, Nastassja Becker
arxiv.org/abs/2508.17880

@arXiv_csAI_bot@mastoxiv.page
2025-07-04 09:16:01

Do Role-Playing Agents Practice What They Preach? Belief-Behavior Consistency in LLM-Based Simulations of Human Trust
Amogh Mannekote, Adam Davies, Guohao Li, Kristy Elizabeth Boyer, ChengXiang Zhai, Bonnie J Dorr, Francesco Pinto
arxiv.org/abs/2507.02197

@arXiv_qfinST_bot@mastoxiv.page
2025-07-04 08:21:01

News Sentiment Embeddings for Stock Price Forecasting
Ayaan Qayyum
arxiv.org/abs/2507.01970 arxiv.org/pdf/2507.01970

@arXiv_nuclth_bot@mastoxiv.page
2025-08-28 08:28:11

Weighted Levenberg-Marquardt methods for fitting multichannel nuclear cross section data
M. Imbrišak, A. E. Lovell, M. R. Mumpower
arxiv.org/abs/2508.19468

@aardrian@toot.cafe
2025-07-28 20:06:05

Hey, I’m not a developer but I totally vibe-coded this game you have to install on your system and which I cannot confirm has no security holes and won’t exfiltrate your data because I’m still not a developer but trust me and install it ideally with root / admin privileges!

@arXiv_csCR_bot@mastoxiv.page
2025-08-26 11:01:46

An Efficient Recommendation Filtering-based Trust Model for Securing Internet of Things
Muhammad Ibn Ziauddin, Rownak Rahad Rabbi, SM Mehrab, Fardin Faiyaz, Mosarrat Jahan
arxiv.org/abs/2508.17304

@rigo@mamot.fr
2025-06-16 13:51:29

At the CTIF workshop in Brussels on copyright infrastructure, a cooperative project of the Finnish and the Estonian governments. I will talk about licensing linked data for trust in media

@berlinbuzzwords@floss.social
2025-08-28 11:00:14

Data warehouses, lakes, lakehouses, and more – the choices we make profoundly impact operational costs and development speed. At Berlin Buzzwords, Lars Albertsson dived into how common operational challenges like deployment, failure handling, and data quality are affected by different data processing paradigms.
Watch the full session: youtu.be/uev_27z3-1s?si=tXTIQ8
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: tickets.plainschwarz.com/bbuzz

@arXiv_astrophSR_bot@mastoxiv.page
2025-08-29 09:33:21

Unveiling the Variability and Chemical Composition of AL Col
Surath C. Ghosh, Santosh Joshi, Samrat Ghosh, Athul Dileep, Otto Trust, Mrinmoy Sarkar, Jaime Andrés Rosales Guzmán, Nicolás Esteban Castro-Toledo, Oleg Malkov, Harinder P. Singh, Kefeng Tan, Sarabjeet S. Bedi
arxiv.org/abs/2508.20681

@arXiv_econGN_bot@mastoxiv.page
2025-07-23 08:07:02

Measuring the Unmeasurable? Systematic Evidence on Scale Transformations in Subjective Survey Data
Caspar Kaiser, Anthony Lepinteur
arxiv.org/abs/2507.16440

@vosje62@mastodon.nl
2025-06-20 14:55:09

Had I already told you that Qwant quite regularly serves up excellent results right away, even when there are only a few?
Qwant - The [French] search engine that respects your privacy
#Quant

@arXiv_csLG_bot@mastoxiv.page
2025-08-25 09:55:10

NOSTRA: A noise-resilient and sparse data framework for trust region based multi objective Bayesian optimization
Maryam Ghasemzadeh, Anton van Beek
arxiv.org/abs/2508.16476

@arXiv_eessIV_bot@mastoxiv.page
2025-06-23 11:42:10

Robust Training with Data Augmentation for Medical Imaging Classification
Josué Martínez-Martínez, Olivia Brown, Mostafa Karami, Sheida Nabavi
arxiv.org/abs/2506.17133

@arXiv_csCR_bot@mastoxiv.page
2025-07-04 09:06:31

Rethinking Broken Object Level Authorization Attacks Under Zero Trust Principle
Anbin Wu (The College of Intelligence and Computing, Tianjin University), Zhiyong Feng (The College of Intelligence and Computing, Tianjin University), Ruitao Feng (The Southern Cross University)
arxiv.org/abs/2507.02309

@arXiv_csHC_bot@mastoxiv.page
2025-06-25 08:34:50

HARPT: A Corpus for Analyzing Consumers' Trust and Privacy Concerns in Mobile Health Apps
Timoteo Kelly, Abdulkadir Korkmaz, Samuel Mallet, Connor Souders, Sadra Aliakbarpour, Praveen Rao
arxiv.org/abs/2506.19268

@arXiv_csIR_bot@mastoxiv.page
2025-07-23 08:02:22

Biases in LLM-Generated Musical Taste Profiles for Recommendation
Bruno Sguerra, Elena V. Epure, Harin Lee, Manuel Moussallam
arxiv.org/abs/2507.16708

@arXiv_csLG_bot@mastoxiv.page
2025-08-27 10:34:33

Breaking the Black Box: Inherently Interpretable Physics-Informed Machine Learning for Imbalanced Seismic Data
Vemula Sreenath, Filippo Gatti, Pierre Jehel
arxiv.org/abs/2508.19031

@arXiv_csCV_bot@mastoxiv.page
2025-06-17 09:35:27

Understanding and Benchmarking the Trustworthiness in Multimodal LLMs for Video Understanding
Youze Wang, Zijun Chen, Ruoyu Chen, Shishen Gu, Yinpeng Dong, Hang Su, Jun Zhu, Meng Wang, Richang Hong, Wenbo Hu
arxiv.org/abs/2506.12336

@arXiv_csHC_bot@mastoxiv.page
2025-06-30 07:57:40

Validation of the MySurgeryRisk Algorithm for Predicting Complications and Death after Major Surgery: A Retrospective Multicenter Study Using OneFlorida Data Trust
Yuanfang Ren, Esra Adiyeke, Ziyuan Guan, Zhenhong Hu, Mackenzie J Meni, Benjamin Shickel, Parisa Rashidi, Tezcan Ozrazgat-Baslanti, Azra Bihorac
arxiv.org/abs…

@berlinbuzzwords@floss.social
2025-09-02 11:00:07

LLMs are now part of our daily work, making coding easier. At Berlin Buzzwords 2025, Ivan Dolgov discussed how they built an in-house LLM for AI code completion in JetBrains products, covering design choices, data preparation, training and model evaluation.
Watch the full session: youtu.be/yLHCwi_mgvQ?si=CpW8bG
Berlin Buzzwords returns on 7-9 June 2026! Get 36% off with our Trust Us Ticket: tickets.plainschwarz.com/bbuzz

@arXiv_astrophSR_bot@mastoxiv.page
2025-08-29 09:17:31

Asteroseismology of HD 23734, HD 68703, and HD 73345 using K2-TESS Space-based Photometry and High-resolution Spectroscopy
Santosh Joshi, Athul Dileep, Eugene Semenko, Mrinmoy Sarkar, Otto Trust, Peter De Cat, Patricia Lampens, Marc-Antoine Dupret, Surath C. Ghosh, David Mkrtichian, Mathijs Vanrespaille, Sugyan Parida, Abhay Pratap Yadav, Pramod Kumar S., P. P. Goswami, Muhammed Riyas, Drisya Karinkuzhi

@arXiv_csAI_bot@mastoxiv.page
2025-08-27 10:10:23

VISION: Robust and Interpretable Code Vulnerability Detection Leveraging Counterfactual Augmentation
David Egea, Barproda Halder, Sanghamitra Dutta
arxiv.org/abs/2508.18933

@arXiv_csCR_bot@mastoxiv.page
2025-08-25 07:32:30

Implementing Zero Trust Architecture to Enhance Security and Resilience in the Pharmaceutical Supply Chain
Saeid Ghasemshirazi, Ghazaleh Shirvani, Marziye Ranjbar Tavakoli, Bahar Ghaedi, Mohammad Amin Langarizadeh
arxiv.org/abs/2508.15776

@arXiv_csCY_bot@mastoxiv.page
2025-06-18 08:11:50

Towards an Approach for Evaluating the Impact of AI Standards
Julia Lane
arxiv.org/abs/2506.13839 arxiv.org/pdf/2506.…

@arXiv_csCR_bot@mastoxiv.page
2025-08-19 11:32:50

Data-driven Trust Bootstrapping for Mobile Edge Computing-based Industrial IoT Services
Prabath Abeysekara, Hai Dong
arxiv.org/abs/2508.12560

@arXiv_csAI_bot@mastoxiv.page
2025-07-01 11:31:03

Attestable Audits: Verifiable AI Safety Benchmarks Using Trusted Execution Environments
Christoph Schnabl, Daniel Hugenroth, Bill Marino, Alastair R. Beresford
arxiv.org/abs/2506.23706

@arXiv_csCR_bot@mastoxiv.page
2025-07-29 09:45:31

"Blockchain-Enabled Zero Trust Framework for Securing FinTech Ecosystems Against Insider Threats and Cyber Attacks"
Avinash Singh, Vikas Pareek, Asish Sharma
arxiv.org/abs/2507.19976

@arXiv_csSI_bot@mastoxiv.page
2025-07-21 09:07:10

Characterizing the Dynamics of Conspiracy Related German Telegram Conversations during COVID-19
Elisabeth Höldrich, Mathias Angermaier, Jana Lasser, Joao Pinheiro-Neto
arxiv.org/abs/2507.13398

@arXiv_csCR_bot@mastoxiv.page
2025-07-29 09:09:31

Trivial Trojans: How Minimal MCP Servers Enable Cross-Tool Exfiltration of Sensitive Data
Nicola Croce, Tobin South
arxiv.org/abs/2507.19880

@arXiv_econGN_bot@mastoxiv.page
2025-06-23 08:37:50

Social Media Can Reduce Misinformation When Public Scrutiny is High
Gavin Wang, Haofei Qin, Xiao Tang, Lynn Wu
arxiv.org/abs/2506.16355

@arXiv_csHC_bot@mastoxiv.page
2025-06-24 11:25:20

Patient-Centred Explainability in IVF Outcome Prediction
Adarsa Sivaprasad, Ehud Reiter, David McLernon, Nava Tintarev, Siladitya Bhattacharya, Nir Oren
arxiv.org/abs/2506.18760

@arXiv_csCY_bot@mastoxiv.page
2025-06-24 10:34:50

Public Perceptions of Autonomous Vehicles: A Survey of Pedestrians and Cyclists in Pittsburgh
Rudra Y. Bedekar
arxiv.org/abs/2506.17513

@arXiv_csCR_bot@mastoxiv.page
2025-07-04 10:00:01

NVIDIA GPU Confidential Computing Demystified
Zhongshu Gu, Enriquillo Valdez, Salman Ahmed, Julian James Stephen, Michael Le, Hani Jamjoom, Shixuan Zhao, Zhiqiang Lin
arxiv.org/abs/2507.02770

@arXiv_csCY_bot@mastoxiv.page
2025-07-22 09:52:10

Mining Voter Behaviour and Confidence: A Rule-Based Analysis of the 2022 U.S. Elections
Md Al Jubair, Mohammad Shamsul Arefin, Ahmed Wasif Reza
arxiv.org/abs/2507.14236

@arXiv_csAI_bot@mastoxiv.page
2025-08-21 07:32:39

The Agent Behavior: Model, Governance and Challenges in the AI Digital Age
Qiang Zhang, Pei Yan, Yijia Xu, Chuanpo Fu, Yong Fang, Yang Liu
arxiv.org/abs/2508.14415

@arXiv_csHC_bot@mastoxiv.page
2025-08-28 09:44:01

Towards a Real-Time Warning System for Detecting Inaccuracies in Photoplethysmography-Based Heart Rate Measurements in Wearable Devices
Rania Islmabouli, Marlene Brunner, Devender Kumar, Mahdi Sareban, Gunnar Treff, Michael Neudorfer, Josef Niebauer, Arne Bathke, Jan David Smeddinck
arxiv.org/abs/2508.19818

@arXiv_csCR_bot@mastoxiv.page
2025-08-20 09:12:40

Beneath the Mask: Can Contribution Data Unveil Malicious Personas in Open-Source Projects?
Ruby Nealon
arxiv.org/abs/2508.13453 arxiv.org/p…

@arXiv_csCY_bot@mastoxiv.page
2025-08-19 10:42:20

Developing a Responsible AI Framework for Healthcare in Low Resource Countries: A Case Study in Nepal and Ghana
Hari Krishna Neupane, Bhupesh Kumar Mishra
arxiv.org/abs/2508.12389

@arXiv_csCR_bot@mastoxiv.page
2025-07-31 09:10:41

Large Language Model-Based Framework for Explainable Cyberattack Detection in Automatic Generation Control Systems
Muhammad Sharshar, Ahmad Mohammad Saber, Davor Svetinovic, Amr M. Youssef, Deepa Kundur, Ehab F. El-Saadany
arxiv.org/abs/2507.22239

@arXiv_csCR_bot@mastoxiv.page
2025-06-26 09:36:20

Autonomous Cyber Resilience via a Co-Evolutionary Arms Race within a Fortified Digital Twin Sandbox
Malikussaid, Sutiyo
arxiv.org/abs/2506.20102