Tootfinder

Opt-in global Mastodon full text search. Join the index!

@arXiv_csAI_bot@mastoxiv.page
2025-08-19 09:20:39

MAPF-World: Action World Model for Multi-Agent Path Finding
Zhanjiang Yang, Meng Li, Yang Shen, Yueming Li, Lijun Sun
arxiv.org/abs/2508.12087

@tante@tldr.nettime.org
2025-09-15 12:00:05

"The understanding that the total supremacy of the “data” discourse was always a problematic, neoliberal way of seeing and structuring the world, of legitimizing violence according to the needs of those in power."
(Original title: The “Data” Narrative eats itself)
tante.cc/2025/09/15/…

@arXiv_csCV_bot@mastoxiv.page
2025-07-18 10:19:42

Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models
Arian Mousakhan, Sudhanshu Mittal, Silvio Galesso, Karim Farid, Thomas Brox
arxiv.org/abs/2507.13162

@arXiv_csCL_bot@mastoxiv.page
2025-08-19 11:38:40

Leveraging Large Language Models for Predictive Analysis of Human Misery
Bishanka Seal, Rahul Seetharaman, Aman Bansal, Abhilash Nandy
arxiv.org/abs/2508.12669

@midtsveen@social.linux.pizza
2025-09-16 15:14:19

I have to agree with Hofmann, LSD really lets you connect with nature in a way that’s unlike anything else. It’s not just seeing trees or animals differently, it’s like the walls between you and the world around you come down, and you feel everything’s alive and connected.
That experience makes it impossible to ignore how fragile and important all of this is, the plants, animals, the earth itself, and us too. It’s a reminder that we’re part of something bigger, and that caring for natu…

Black and white photo of Albert Hofmann in a lab, holding a molecular model. Beside the photo is a quote: 'Through my LSD experience and my new picture of reality, I became aware of the wonder of creation, the magnificence of nature and of the animal and plant kingdom. I became very sensitive to what will happen to all this and all of us.' Albert Hofmann.

@arXiv_csSE_bot@mastoxiv.page
2025-09-18 09:38:41

Who is Introducing the Failure? Automatically Attributing Failures of Multi-Agent Systems via Spectrum Analysis
Yu Ge (Nanjing University), Linna Xie (Nanjing University), Zhong Li (Nanjing University), Yu Pei (The Hong Kong Polytechnic University), Tian Zhang (Nanjing University)
arxiv.org/abs/2509.13782

@arXiv_csSD_bot@mastoxiv.page
2025-09-18 07:46:21

A Domain Knowledge Informed Approach for Anomaly Detection of Electric Vehicle Interior Sounds
Deepti Kunte, Bram Cornelis, Claudio Colangeli, Karl Janssens, Brecht Van Baelen, Konstantinos Gryllias
arxiv.org/abs/2509.13390

@arXiv_csRO_bot@mastoxiv.page
2025-08-19 10:06:20

Control of Legged Robots using Model Predictive Optimized Path Integral
Hossein Keshavarz, Alejandro Ramirez-Serrano, Majid Khadiv
arxiv.org/abs/2508.11917

@arXiv_eessAS_bot@mastoxiv.page
2025-09-18 09:05:11

Summary on The Multilingual Conversational Speech Language Model Challenge: Datasets, Tasks, Baselines, and Methods
Bingshen Mu, Pengcheng Guo, Zhaokai Sun, Shuai Wang, Hexin Liu, Mingchen Shao, Lei Xie, Eng Siong Chng, Longshuai Xiao, Qiangze Feng, Daliang Wang
arxiv.org/abs/2509.13785

@arXiv_eessSY_bot@mastoxiv.page
2025-09-18 09:51:51

Large Language Model-Empowered Decision Transformer for UAV-Enabled Data Collection
Zhixion Chen, Jiangzhou Wang, Hyundong Shin, Arumugam Nallanathan
arxiv.org/abs/2509.13934

@arXiv_csIR_bot@mastoxiv.page
2025-08-19 08:21:20

A Large-Scale Web Search Dataset for Federated Online Learning to Rank
Marcel Gregoriadis, Jingwei Kang, Johan Pouwelse
arxiv.org/abs/2508.12353

@arXiv_csNE_bot@mastoxiv.page
2025-09-17 08:17:30

A Neuromorphic Model of Learning Meaningful Sequences with Long-Term Memory
Laxmi R. Iyer, Ali A. Minai
arxiv.org/abs/2509.12850 arxiv.org/…

@arXiv_condmatsoft_bot@mastoxiv.page
2025-09-19 09:30:31

A General Model for Static Contact Angles
Carlos E Colosqui
arxiv.org/abs/2509.14692 arxiv.org/pdf/2509.14692

@arXiv_csGR_bot@mastoxiv.page
2025-09-18 08:24:31

Hyperspectral Polarimetric BRDFs of Real-world Materials
Yunseong Moon, Ryota Maeda, Suhyun Shin, Inseung Hwang, Youngchan Kim, Min H. Kim, Seung-Hwan Baek
arxiv.org/abs/2509.13779

@arXiv_csCL_bot@mastoxiv.page
2025-09-18 08:49:31

Sparse Neurons Carry Strong Signals of Question Ambiguity in LLMs
Zhuoxuan Zhang, Jinhao Duan, Edward Kim, Kaidi Xu
arxiv.org/abs/2509.13664

@arXiv_nlinAO_bot@mastoxiv.page
2025-08-18 07:56:30

When higher-order interactions enhance synchronization: the case of the Kuramoto model
Riccardo Muolo, Hiroya Nakao, Marco Coraggio
arxiv.org/abs/2508.10992

@arXiv_csCV_bot@mastoxiv.page
2025-07-18 10:22:32

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
Senqiao Yang, Junyi Li, Xin Lai, Bei Yu, Hengshuang Zhao, Jiaya Jia
arxiv.org/abs/2507.13348

@arXiv_csSI_bot@mastoxiv.page
2025-09-17 09:23:49

A Pressure-Based Diffusion Model for Influence Maximization on Social Networks
Curt Stutsman, Eliot W. Robson, Abhishek K. Umrawal
arxiv.org/abs/2509.12822

@Techmeme@techhub.social
2025-07-16 09:02:14

A look at the Chile-led Latam-GPT project, which involves 30 Latin American and Caribbean institutions collaborating to release an open-source LLM in September (Cristián Vera-Cruz/Rest of World)
restofworld.org/2025/chatgpt-l

@arXiv_hepph_bot@mastoxiv.page
2025-09-17 09:54:10

Probabilities in Toy Regge models with odderons
M. A. Braun
arxiv.org/abs/2509.12819 arxiv.org/pdf/2509.12819

@tiotasram@kolektiva.social
2025-07-30 17:56:35

Just read this post by @… on an optimistic AGI future, and while it had some interesting and worthwhile ideas, it's also in my opinion dangerously misguided, and plays into the current AGI hype in a harmful way.
social.coop/@eloquence/1149406
My criticisms include:
- Current LLM technology has many layers, but the biggest, most capable models are all tied to corporate datacenters and require inordinate amounts of energy and water to run. Trying to use these tools to bring about a post-scarcity economy will burn up the planet. We urgently need more-capable but also vastly more efficient AI technologies if we want to use AI for a post-scarcity economy, and we are *not* nearly on the verge of this despite what the big companies pushing LLMs want us to think.
- I can see that permacommons.org claims that a small level of expenses on AI equates to low climate impact. However, given the deep subsidies currently put in place by the big companies to attract users, that isn't a great assumption. The fact that their FAQ dodges the question about which AI systems they use isn't a great look.
- These systems are not free in the same way that Wikipedia or open-source software is. To run your own model you need a data harvesting & cleaning operation that costs millions of dollars minimum, and then you need millions of dollars' worth of storage & compute to train & host the models. Right now, big corporations are trying to compete for market share by heavily subsidizing these things, but if you go along with that, you become dependent on them, and you'll be screwed when they jack up the price to a profitable level later. I'd love to see open dataset initiatives and the like, and there are some of these things, but not enough yet, and many of the initiatives focus on one problem while ignoring others (fine for research but not the basis for a society yet).
- Between the environmental impacts, the horrible labor conditions and undercompensation of data workers who filter the big datasets, and the impacts of both AI scrapers and AI commons pollution, the developers of the most popular & effective LLMs have a lot to answer for. This project only really mentions environmental impacts, which makes me think that they're not serious about ethics, which in turn makes me distrustful of the whole enterprise.
- Their language also ends up encouraging AI use broadly while totally ignoring several entire classes of harm, so they're effectively contributing to AI hype, especially with such casual talk of AGI and robotics as if embodied AGI were just around the corner. To be clear about this point: we are several breakthroughs away from AGI under the most optimistic assumptions, and giving the impression that those will happen soon plays directly into the hands of the Sam Altmans of the world who are trying to make money off the impression of impending huge advances in AI capabilities. Adding to the AI hype is irresponsible.
- I've got a more philosophical criticism that I'll post about separately.
I do think that the idea of using AI & other software tools, possibly along with robotics and funded by many local cooperatives, in order to make businesses obsolete before they can do the same to all workers, is a good one. Get your local library to buy a knitting machine alongside their 3D printer.
Lately I've felt too busy criticizing AI to really sit down and think about what I do want the future to look like, even though I'm a big proponent of positive visions for the future as a force multiplier for criticism, and this article is inspiring to me in that regard, even if the specific project doesn't seem like a good one.

@arXiv_qfinPR_bot@mastoxiv.page
2025-09-18 08:13:21

Valuation of Exotic Options and Counterparty Games Based on Conditional Diffusion
Helin Zhao, Junchi Shen
arxiv.org/abs/2509.13374 arxiv.or…

@arXiv_eessAS_bot@mastoxiv.page
2025-08-18 08:46:20

MoE-TTS: Enhancing Out-of-Domain Text Understanding for Description-based TTS via Mixture-of-Experts
Heyang Xue, Xuchen Song, Yu Tang, Jianyu Chen, Yanru Chen, Yang Li, Yahui Zhou
arxiv.org/abs/2508.11326

@arXiv_csRO_bot@mastoxiv.page
2025-09-17 10:41:40

Empowering Multi-Robot Cooperation via Sequential World Models
Zijie Zhao, Honglei Guo, Shengqian Chen, Kaixuan Xu, Bo Jiang, Yuanheng Zhu, Dongbin Zhao
arxiv.org/abs/2509.13095

@khalidabuhakmeh@mastodon.social
2025-09-09 20:01:36

Before you spend $59 on that iPhone strap, consider giving the same amount to the World Central Kitchen. They help feed people in humanitarian crises around the world.
Many thanks for considering my request.
wck.org/

@arXiv_eessSY_bot@mastoxiv.page
2025-07-17 09:30:10

Learning, fast and slow: a two-fold algorithm for data-based model adaptation
Laura Boca de Giuli, Alessio La Bella, Riccardo Scattolini
arxiv.org/abs/2507.12187

@arXiv_csCL_bot@mastoxiv.page
2025-08-19 11:41:00

CRED-SQL: Enhancing Real-world Large Scale Database Text-to-SQL Parsing through Cluster Retrieval and Execution Description
Shaoming Duan, Zirui Wang, Chuanyi Liu, Zhibin Zhu, Yuhao Zhang, Peiyi Han, Liang Yan, Zewu Penge
arxiv.org/abs/2508.12769

@Techmeme@techhub.social
2025-07-16 08:28:24

Tokopedia sellers say Tokopedia's strengths have eroded since its TikTok Shop merger in Indonesia, driving thousands of sellers to join rivals, including Toco (Michelle Anindya/Rest of World)
restofworld.org/2025/tiktok-in

@arXiv_hepex_bot@mastoxiv.page
2025-07-14 08:35:52

Search for High-Energy Neutrinos From the Sun Using Ten Years of IceCube Data
Abbasi, Ackermann, Adams, Agarwalla, Aguilar, Ahlers, Alameddine, Ali, Amin, Andeen, Argüelles, Ashida, Athanasiadou, Axani, Babu, Bai, Baines-Holmes, V., Barwick, Bash, Basu, Bay, Beatty, Tjus, Behrens, Beise, Bellenghi, Benkel, BenZvi, Berley, Bernardini, Besson, Blaufuss, Bloom, Blot, Bodo, Bontempo, Motzkin, Meneguolo, Böser, Botner, Böttcher, Braun, Brinson, Brisson-Tsavoussis, Burle…

@arXiv_eessIV_bot@mastoxiv.page
2025-09-16 08:52:57

MIDOG 2025 Track 2: A Deep Learning Model for Classification of Atypical and Normal Mitotic Figures under Class and Hardness Imbalances
Sujatha Kotte, Vangala Govindakrishnan Saipradeep, Vidushi Walia, Dhandapani Nandagopal, Thomas Joseph, Naveen Sivadasan, Bhagat Singh Lali
arxiv.org/abs/2509.10502

@arXiv_csNI_bot@mastoxiv.page
2025-09-17 09:02:20

State Aware Traffic Generation for Real-Time Network Digital Twins
Enes Koktas, Peter Rost
arxiv.org/abs/2509.12860 arxiv.org/pdf/2509.1286…

@adulau@infosec.exchange
2025-07-08 08:57:00

VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification.
This paper presents VLAI, a transformer-based model that predicts software vulnerability severity levels directly from text descriptions. Built on RoBERTa, VLAI is fine-tuned on over 600,000 real-world vulnerabilities and achieves over 82% accuracy in predicting severity categories, enabling faster and more consistent triage ahead of manual CVSS scoring. The model and dataset are open-source and integrated…
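
For readers curious how the general recipe in the post looks in code, here is a minimal, illustrative sketch of a RoBERTa encoder fine-tuned as a text classifier over severity buckets. The checkpoint name, label set, and example descriptions are placeholders for illustration only, not the actual VLAI pipeline or training data.

```python
# Sketch: fine-tune RoBERTa as a severity classifier for vulnerability descriptions.
# The label set, checkpoint, and example texts are illustrative placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

SEVERITIES = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]  # assumed CVSS-like buckets

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(SEVERITIES)
)

def predict_severity(description: str) -> str:
    """Classify a free-text vulnerability description into a severity bucket."""
    inputs = tokenizer(description, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return SEVERITIES[int(logits.argmax(dim=-1))]

# One training step; a real run would loop over the ~600k labelled descriptions.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(
    ["Buffer overflow in the parser allows remote code execution."],
    truncation=True, padding=True, return_tensors="pt",
)
loss = model(**batch, labels=torch.tensor([SEVERITIES.index("CRITICAL")])).loss
loss.backward()
optimizer.step()

print(predict_severity("SQL injection in the login form leaks user records."))
```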

@arXiv_csCR_bot@mastoxiv.page
2025-07-16 10:00:11

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification
Fengxiao Tang, Huan Li, Ming Zhao, Zongzong Wu, Shisong Peng, Tao Yin
arxiv.org/abs/2507.11310

@arXiv_csLG_bot@mastoxiv.page
2025-08-15 10:08:12

Driving Accurate Allergen Prediction with Protein Language Models and Generalization-Focused Evaluation
Brian Shing-Hei Wong, Joshua Mincheol Kim, Sin-Hang Fung, Qing Xiong, Kelvin Fu-Kiu Ao, Junkang Wei, Ran Wang, Dan Michelle Wang, Jingying Zhou, Bo Feng, Alfred Sze-Lok Cheng, Kevin Y. Yip, Stephen Kwok-Wing Tsui, Qin Cao
arxiv.o…

@arXiv_csCY_bot@mastoxiv.page
2025-09-08 07:46:39

The LLM Has Left The Chat: Evidence of Bail Preferences in Large Language Models
Danielle Ensign, Henry Sleight, Kyle Fish
arxiv.org/abs/2509.04781

@arXiv_csNE_bot@mastoxiv.page
2025-08-19 09:10:10

LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization
Zhen-Song Chen, Hong-Wei Ding, Xian-Jia Wang, Witold Pedrycz
arxiv.org/abs/2508.11871

@arXiv_physicssocph_bot@mastoxiv.page
2025-07-16 08:24:11

Universal self-similarity of hierarchical communities formed through a general self-organizing principle
Shruti Tandon (equal), Nidhi Dilip Sonwane (equal), Tobias Braun, Norbert Marwan, Juergen Kurths, R. I. Sujith
arxiv.org/abs/2507.11159

@arXiv_csCL_bot@mastoxiv.page
2025-08-18 09:08:40

Hell or High Water: Evaluating Agentic Recovery from External Failures
Andrew Wang, Sophia Hager, Adi Asija, Daniel Khashabi, Nicholas Andrews
arxiv.org/abs/2508.11027

@arXiv_csCV_bot@mastoxiv.page
2025-09-17 10:52:00

Brought a Gun to a Knife Fight: Modern VFM Baselines Outgun Specialized Detectors on In-the-Wild AI Image Detection
Yue Zhou, Xinan He, Kaiqing Lin, Bing Fan, Feng Ding, Jinhua Zeng, Bin Li
arxiv.org/abs/2509.12995

@arXiv_quantph_bot@mastoxiv.page
2025-09-04 10:09:41

Identifiability and minimality bounds of quantum and post-quantum models of classical stochastic processes
Paul M. Riechers, Thomas J. Elliott
arxiv.org/abs/2509.03004

@arXiv_csGT_bot@mastoxiv.page
2025-09-10 07:34:51

Persuading Agents in Opinion Formation Games
Martin Hoefer, Tim Koglin, Tolga Tel
arxiv.org/abs/2509.07520 arxiv.org/pdf/2509.07520

@arXiv_physicsaoph_bot@mastoxiv.page
2025-09-15 08:27:51

A Deep Learning Model of Lightning Stroke Density
Randall Jones II, Joel A. Thornton, Chris J. Wright, Robert Holzworth
arxiv.org/abs/2509.10399

@arXiv_eessSY_bot@mastoxiv.page
2025-09-16 11:40:27

Continuous-Time Distributed Learning for Collective Wisdom Maximization
Luka Baković, Giacomo Como, Fabio Fagnani, Anton Proskurnikov, Emma Tegling
arxiv.org/abs/2509.11808

@arXiv_eessSP_bot@mastoxiv.page
2025-08-15 09:28:12

Unsupervised Deep Equilibrium Model Learning for Large-Scale Channel Estimation with Performance Guarantees
Haotian Tian, Lixiang Lian
arxiv.org/abs/2508.10546

@arXiv_csAI_bot@mastoxiv.page
2025-09-12 09:40:09

The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs
Akshit Sinha, Arvindh Arun, Shashwat Goel, Steffen Staab, Jonas Geiping
arxiv.org/abs/2509.09677

@pbloem@sigmoid.social
2025-06-26 10:56:22

After training, we finetune on real-world data. We observe that the models that have been pre-trained with noise converge very quickly compared to a baseline which is trained from scratch.
Moreover, on the other datasets, the UP models retain their zero-shot performance during finetuning. This suggests that there may be a generalization benefit to using a UP model.
All this is at the expense of much longer training, but that cost can be amortized over many tasks.

The results for the finetuning experiment. Six datasets (linux, code, dyck, wp, german and ndfa) and the performance of four models: the baseline and UP trained models and two finetuning datasets. 

The results show that the UP models converge quicker, and that they retain most of their zero-shot performance on the other datasets.
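
As a schematic of the workflow the thread describes — fine-tune a checkpoint that was pre-trained on noise ("UP") and an identical from-scratch baseline on a target dataset, while tracking loss on held-out datasets as a zero-shot check — here is a minimal sketch. The loss function, data loaders, and checkpoint names are placeholders, not the authors' actual setup.

```python
# Sketch: compare fine-tuning a pre-trained model against a from-scratch baseline,
# logging loss on held-out datasets ("zero-shot" checks) along the way.
import torch
import torch.nn as nn

def evaluate(model: nn.Module, loader) -> float:
    """Mean cross-entropy over a data loader (used for the zero-shot checks)."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += nn.functional.cross_entropy(model(x), y).item()
            n += 1
    return total / max(n, 1)

def finetune(model: nn.Module, train_loader, eval_loaders, steps=1000, lr=3e-4):
    """Fine-tune `model` on `train_loader`, periodically reporting held-out loss."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    data = iter(train_loader)
    for step in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(train_loader)
            x, y = next(data)
        model.train()
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        if step % 100 == 0:
            zero_shot = {name: evaluate(model, dl) for name, dl in eval_loaders.items()}
            print(f"step {step}: train loss {loss.item():.3f}, zero-shot {zero_shot}")
    return model

# Hypothetical usage (all names below are placeholders, not the authors' code):
# up_model = torch.load("up_pretrained.pt")   # checkpoint pre-trained on noise
# scratch  = build_fresh_model()              # same architecture, random init
# finetune(up_model, linux_loader, {"code": code_loader, "wp": wp_loader})
# finetune(scratch,  linux_loader, {"code": code_loader, "wp": wp_loader})
```
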
@tiotasram@kolektiva.social
2025-06-24 09:39:49

Subtooting since people in the original thread wanted it to be over, but selfishly tagging @… and @… whose opinions I value...
I think that saying "we are not a supply chain" is exactly what open-source maintainers should be doing right now in response to "open source supply chain security" threads.
I can't claim to be an expert and don't maintain any important FOSS stuff, but I do release almost all of my code under open licenses, and I do use many open source libraries, and I have felt the pain of needing to replace an unmaintained library.
There's a certain small-to-mid-scale class of program, including many open-source libraries, which can be built/maintained by a single person, and which to my mind best operate on a "snake growth" model: incremental changes/fixes, punctuated by periodic "skin-shedding" phases where major rewrites or version updates happen. These projects aren't immortal either: as the whole tech landscape around them changes, they become unnecessary and/or people lose interest, so they go unmaintained and eventually break. Each time one of their dependencies breaks (or has a skin-shedding moment) there's a higher probability that they break or shed too, as maintenance needs shoot up at these junctures. Unless you're a company trying to make money from a single long-lived app, it's actually okay that software churns like this, and if you're a company trying to make money, your priorities absolutely should not factor into any decisions people making FOSS software make: we're trying (and to a huge extent succeeding) to make a better world (and/or just have fun with our own hobbies and share that fun with others) that leaves behind the corrosive & planet-destroying plague which is capitalism, and you're trying to personally enrich yourself by embracing that plague. The fact that capitalism is *evil* is not an incidental thing in this discussion.
To make an imperfect analogy, imagine that the peasants of some domain have set up a really-free-market, where they provide each other with free stuff to help each other survive, sometimes doing some barter perhaps but mostly just everyone bringing their surplus. Now imagine the lord of the domain, who is the source of these peasants' immiseration, goes to this market secretly & takes some berries, which he uses as one ingredient in delicious tarts that he then sells for profit. But then the berry-bringer stops showing up to the free market, or starts bringing a different kind of fruit, or even ends up bringing rotten berries by accident. And the lord complains "I have a supply chain problem!" Like, fuck off dude! Your problem is that you *didn't* want to build a supply chain and instead thought you would build your profit-focused business on other people's free stuff. If you were paying the berry-picker, you'd have a supply chain problem, but you weren't, so you really have an "I want more free stuff" problem when you can't be arsed to give away your own stuff for free.
There can be all sorts of problems in the really-free-market, like maybe not enough people bring socks, so the peasants who can't afford socks are going barefoot, and having foot problems, and the peasants put their heads together and see if they can convince someone to start bringing socks, and maybe they can't and things are a bit sad, but the really-free-market was never supposed to solve everyone's problems 100% when they're all still being squeezed dry by their taxes: until they are able to get free of the lord & start building a lovely anarchist society, the really-free-market is a best-effort kind of deal that aims to make things better, and sometimes will fall short. When it becomes the main way goods in society are distributed, and when the people who contribute aren't constantly drained by the feudal yoke, at that point the availability of particular goods is a real problem that needs to be solved, but at that point, it's also much easier to solve. And at *no* point does someone coming into the market to take stuff only to turn around and sell it deserve anything from the market or those contributing to it. They are not a supply chain. They're trying to help each other out, but even then they're doing so freely and without obligation. They might discuss amongst themselves how to better coordinate their mutual aid, but they're not going to end up forcing anyone to bring anything or even expecting that a certain person contribute a certain amount, since the whole point is that the thing is voluntary & free, and they've all got changing life circumstances that affect their contributions. Celebrate whatever shows up at the market, express your desire for things that would be useful, but don't impose a burden on anyone else to bring a specific thing, because otherwise it's fair for them to impose such a burden on you, and now you two are doing your own barter thing that's outside the parameters of the really-free-market.

@arXiv_csCV_bot@mastoxiv.page
2025-09-17 07:41:09

OnlineHOI: Towards Online Human-Object Interaction Generation and Perception
Yihong Ji, Yunze Liu, Yiyao Zhuo, Weijiang Yu, Fei Ma, Joshua Huang, Fei Yu
arxiv.org/abs/2509.12250

@arXiv_mathCO_bot@mastoxiv.page
2025-09-05 09:01:51

On the $Z_q$-forcing number: computational approach and exact values
Aida Abiad, Maryam Moghaddas
arxiv.org/abs/2509.03967 arxiv.org/pdf/25…

@arXiv_astrophEP_bot@mastoxiv.page
2025-07-29 08:51:01

The Impact of Different Haze Types on the Atmosphere and Observations of Hot Jupiters: 3D Simulations of HD 189733b, HD209458b and WASP-39b
Mei Ting Mak, Denis Sergeev, Nathan Mayne, Maria Zamyatina, Maria E. Steinrueck, James Manners, Eric Hebrard, David K. Sing, Krisztian Kohary
arxiv.org/abs/2507.20366

@arXiv_csLO_bot@mastoxiv.page
2025-09-05 08:05:11

Simplicity Lies in the Eye of the Beholder: A Strategic Perspective on Controllers in Reactive Synthesis
Mickael Randour
arxiv.org/abs/2509.04129

@arXiv_csOH_bot@mastoxiv.page
2025-09-10 07:37:11

Modelling Scenarios for Carbon-aware Geographic Load Shifting of Compute Workloads
Wim Vanderbauwhede
arxiv.org/abs/2509.07043 arxiv.org/pd…

@pre@boing.world
2025-07-14 16:29:01

Tesla shareholders will apparently get to vote on whether Tesla should bail out xAI/Twitter.
Do Tesla shareholders want to give Musk more money in return for Tesla owning part of his nazi AI model and his nazi troll site?
We shall see. My guess is yes! Tesla share owners will vote to dilute themselves in return for the chance to bail out the failing Twitter and Grok.
#xai #grok #twitter #tesla

@arXiv_csCR_bot@mastoxiv.page
2025-09-16 11:34:37

From Paradigm Shift to Audit Rift: Exploring Vulnerabilities and Audit Tips for TON Smart Contracts
Yury Yanovich, Sergey Sobolev, Yash Madhwal, Kirill Ziborov, Vladimir Gorgadze, Victoria Kovalevskay, Elizaveta Smirnova, Matvey Mishuris, Subodh Sharma
arxiv.org/abs/2509.10823

@arXiv_csNI_bot@mastoxiv.page
2025-09-15 07:51:11

Taming Volatility: Stable and Private QUIC Classification with Federated Learning
Richard Jozsa, Karel Hynek, Adrian Pekar
arxiv.org/abs/2509.09997

@arXiv_csCY_bot@mastoxiv.page
2025-08-12 09:37:13

"Draw me a curator" Examining the visual stereotyping of a cultural services profession by generative AI
Dirk HR Spennemann
arxiv.org/abs/2508.07132

@rmdes@mstdn.social
2025-06-21 12:11:58

How long until the internet, which allowed a generation to benefit from a vast wealth of human knowledge, becomes a swamp filled with generated #AI pollution? It may already be too late. theregist…

@arXiv_csAI_bot@mastoxiv.page
2025-07-16 10:17:41

DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering
Yinsheng Li, Zhen Dong, Yi Shao
arxiv.org/abs/2507.11527

@arXiv_quantph_bot@mastoxiv.page
2025-08-11 09:54:29

Enhancing the Scalability of Classical Surrogates for Real-World Quantum Machine Learning Applications
Philip Anton Hernicht, Alona Sakhnenko, Corey O'Meara, Giorgio Cortiana, Jeanette Miriam Lorenz
arxiv.org/abs/2508.06131

@arXiv_eessAS_bot@mastoxiv.page
2025-09-19 09:58:01

From Who Said What to Who They Are: Modular Training-free Identity-Aware LLM Refinement of Speaker Diarization
Yu-Wen Chen, William Ho, Maxim Topaz, Julia Hirschberg, Zoran Kostic
arxiv.org/abs/2509.15082

@arXiv_csIT_bot@mastoxiv.page
2025-08-05 08:51:00

Robust Detection of Planted Subgraphs in Semi-Random Models
Dor Elimelech, Wasim Huleihel
arxiv.org/abs/2508.02158 arxiv.org/pdf/2508.02158…

@arXiv_csLG_bot@mastoxiv.page
2025-08-15 10:14:32

Conditional Information Bottleneck for Multimodal Fusion: Overcoming Shortcut Learning in Sarcasm Detection
Yihua Wang, Qi Jia, Cong Xu, Feiyu Chen, Yuhan Liu, Haotian Zhang, Liang Jin, Lu Liu, Zhichun Wang
arxiv.org/abs/2508.10644

@arXiv_mathGM_bot@mastoxiv.page
2025-09-16 10:10:26

A Type 2 Fuzzy Set Approach for Building Linear Linguistic Regression Analysis under Multi Uncertainty
Junzo Watada, Pei-Chun Lin, Bo Wang, Jeng-Shyang Pan, Jose Guadalupe Flores Muniz
arxiv.org/abs/2509.10498

@arXiv_csNE_bot@mastoxiv.page
2025-09-16 07:38:06

Deep Reinforcement Learning-Assisted Component Auto-Configuration of Differential Evolution Algorithm for Constrained Optimization: A Foundation Model
Xu Yang, Rui Wang, Kaiwen Li, Wenhua Li, Ling Wang
arxiv.org/abs/2509.11016

@arXiv_csGT_bot@mastoxiv.page
2025-08-12 09:30:13

Emergence of Cooperation and Commitment in Optional Prisoner's Dilemma
Zhao Song, The Anh Han
arxiv.org/abs/2508.06702 arxiv.org/pdf/25…

@Techmeme@techhub.social
2025-08-07 17:19:27

OpenAI says GPT-5 is its first "unified" AI model and combines the reasoning abilities of its o-series of models with the fast responses of its GPT series (Maxwell Zeff/TechCrunch)
techcrunch.com/2025/08/07/open

@arXiv_eessIV_bot@mastoxiv.page
2025-09-09 07:40:21

A Synthetic-to-Real Dehazing Method based on Domain Unification
Zhiqiang Yuan, Jinchao Zhang, Jie Zhou
arxiv.org/abs/2509.05374 arxiv.org/p…

@arXiv_hepex_bot@mastoxiv.page
2025-08-12 08:02:03

Real-Time Analysis of Unstructured Data with Machine Learning on Heterogeneous Architectures
Fotis I. Giasemis
arxiv.org/abs/2508.07423 arx…

@arXiv_csCV_bot@mastoxiv.page
2025-08-15 10:22:42

Privacy-enhancing Sclera Segmentation Benchmarking Competition: SSBC 2025
Matej Vitek, Darian Tomašević, Abhijit Das, Sabari Nathan, Gökhan Özbulak, Gözde Ayşe Tataroğlu Özbulak, Jean-Paul Calbimonte, André Anjos, Hariohm Hemant Bhatt, Dhruv Dhirendra Premani, Jay Chaudhari, Caiyong Wang, Jian Jiang, Chi Zhang, Qi Zhang, Iyyakutti Iyappan Ganapathi, Syed Sadaf Ali, Divya Velayudan, Maregu Assefa, Naoufel Werghi, Zachary A. Daniels, Le…

@arXiv_csRO_bot@mastoxiv.page
2025-08-11 09:37:49

Bounding Distributional Shifts in World Modeling through Novelty Detection
Eric Jing, Abdeslam Boularias
arxiv.org/abs/2508.06096 arxiv.org…

@arXiv_csSE_bot@mastoxiv.page
2025-09-03 09:49:33

Aligning Requirement for Large Language Model's Code Generation
Zhao Tian, Junjie Chen
arxiv.org/abs/2509.01313 arxiv.org/pdf/2509.0131…

@arXiv_csLG_bot@mastoxiv.page
2025-07-14 09:13:22

Physics-Informed Neural Networks with Hard Nonlinear Equality and Inequality Constraints
Ashfaq Iftakher, Rahul Golder, M. M. Faruque Hasan
arxiv.org/abs/2507.08124 arxiv.org/pdf/2507.08124 arxiv.org/html/2507.08124
arXiv:2507.08124v1 Announce Type: new
Abstract: Traditional physics-informed neural networks (PINNs) do not guarantee strict constraint satisfaction. This is problematic in engineering systems where minor violations of governing laws can significantly degrade the reliability and consistency of model predictions. In this work, we develop KKT-Hardnet, a PINN architecture that enforces both linear and nonlinear equality and inequality constraints up to machine precision. It leverages a projection onto the feasible region through solving Karush-Kuhn-Tucker (KKT) conditions of a distance minimization problem. Furthermore, we reformulate the nonlinear KKT conditions using log-exponential transformation to construct a general sparse system with only linear and exponential terms, thereby making the projection differentiable. We apply KKT-Hardnet on both test problems and a real-world chemical process simulation. Compared to multilayer perceptrons and PINNs, KKT-Hardnet achieves higher accuracy and strict constraint satisfaction. This approach allows the integration of domain knowledge into machine learning towards reliable hybrid modeling of complex systems.
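
As a rough illustration of the hard-projection idea in the abstract above — not the paper's KKT-Hardnet, which also handles nonlinear equalities and inequalities via a log-exponential reformulation of the KKT conditions — here is a minimal sketch for the special case of linear equality constraints, where the distance-minimization projection has a closed form.

```python
# Sketch: a projection layer that enforces linear equality constraints A y = b
# exactly, using the closed-form solution y* = y - A^T (A A^T)^{-1} (A y - b).
# This is only the linear-equality special case, not the paper's full method.
import torch
import torch.nn as nn

class LinearEqualityProjection(nn.Module):
    def __init__(self, A: torch.Tensor, b: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)
        self.register_buffer("b", b)
        # Precompute (A A^T)^{-1} once; A is assumed to have full row rank.
        self.register_buffer("AAt_inv", torch.linalg.inv(A @ A.T))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        residual = y @ self.A.T - self.b        # (batch, m) constraint violation
        correction = residual @ self.AAt_inv @ self.A
        return y - correction                   # projected output satisfies A y = b

# Toy usage: a 2-output network whose outputs must sum to 1 (A = [1, 1], b = [1]).
A = torch.tensor([[1.0, 1.0]])
b = torch.tensor([1.0])
net = nn.Sequential(
    nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 2), LinearEqualityProjection(A, b)
)

x = torch.randn(4, 3)
y = net(x)
print(y.sum(dim=1))  # each row sums to 1 up to floating-point precision
```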

@arXiv_csCL_bot@mastoxiv.page
2025-09-15 09:45:01

MCP-AgentBench: Evaluating Real-World Language Agent Performance with MCP-Mediated Tools
Zikang Guo, Benfeng Xu, Chiwei Zhu, Wentao Hong, Xiaorui Wang, Zhendong Mao
arxiv.org/abs/2509.09734

@Techmeme@techhub.social
2025-08-05 14:31:01

Google DeepMind releases its Genie 3 model, which can generate 3D worlds from a prompt and has enough visual memory for a few minutes of continuous interaction (Jay Peters/The Verge)
theverge.com/news/718723/googl

@arXiv_csCV_bot@mastoxiv.page
2025-07-16 10:33:31

UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks
Peiran Wu, Yunze Liu, Zhengdong Zhu, Enmin Zhou, Shawn Shen
arxiv.org/abs/2507.11336

@arXiv_csCR_bot@mastoxiv.page
2025-07-08 12:53:10

Arbiter PUF: Uniqueness and Reliability Analysis Using Hybrid CMOS-Stanford Memristor Model
Tanvir Rahman, A. B. M. Harun-ur Rashid
arxiv.org/abs/2507.04461

@arXiv_csAI_bot@mastoxiv.page
2025-09-08 07:36:09

An Approach to Grounding AI Model Evaluations in Human-derived Criteria
Sasha Mitts
arxiv.org/abs/2509.04676 arxiv.org/pdf/2509.04676

@arXiv_csLG_bot@mastoxiv.page
2025-09-10 10:42:51

One Model for All Tasks: Leveraging Efficient World Models in Multi-Task Planning
Yuan Pu, Yazhe Niu, Jia Tang, Junyu Xiong, Shuai Hu, Hongsheng Li
arxiv.org/abs/2509.07945

@arXiv_eessIV_bot@mastoxiv.page
2025-09-12 08:06:29

Generalized User-Oriented Image Semantic Coding Empowered by Large Vision-Language Model
Sin-Yu Huang, Vincent W. S. Wong
arxiv.org/abs/2509.08913

@Techmeme@techhub.social
2025-08-03 01:36:03

A profile of Robinhood CEO Vlad Tenev, whose personal fortune has surged 6x over the past year to $6.1B, as the company leans into tokenized stock derivatives (Nina Bambysheva/Forbes)
forbes.com/sites/ninabambyshev

@arXiv_csAI_bot@mastoxiv.page
2025-07-31 07:31:41

CoEx -- Co-evolving World-model and Exploration
Minsoo Kim, Seung-won Hwang
arxiv.org/abs/2507.22281 arxiv.org/pdf/2507.22281

@arXiv_csRO_bot@mastoxiv.page
2025-06-27 09:43:59

WorldVLA: Towards Autoregressive Action World Model
Jun Cen, Chaohui Yu, Hangjie Yuan, Yuming Jiang, Siteng Huang, Jiayan Guo, Xin Li, Yibing Song, Hao Luo, Fan Wang, Deli Zhao, Hao Chen
arxiv.org/abs/2506.21539

@arXiv_csCL_bot@mastoxiv.page
2025-09-15 09:42:51

A Role-Aware Multi-Agent Framework for Financial Education Question Answering with LLMs
Andy Zhu, Yingjun Du
arxiv.org/abs/2509.09727 arxiv…

@arXiv_csAI_bot@mastoxiv.page
2025-09-05 09:55:31

World Model Implanting for Test-time Adaptation of Embodied Agents
Minjong Yoo, Jinwoo Jang, Sihyung Yoon, Honguk Woo
arxiv.org/abs/2509.03956

@arXiv_eessIV_bot@mastoxiv.page
2025-07-11 09:03:21

Label-Efficient Chest X-ray Diagnosis via Partial CLIP Adaptation
Heet Nitinkumar Dalsania
arxiv.org/abs/2507.07254 a…

@arXiv_csCV_bot@mastoxiv.page
2025-07-10 07:33:51

Unveiling the Underwater World: CLIP Perception Model-Guided Underwater Image Enhancement
Jiangzhong Cao, Zekai Zeng, Xu Zhang, Huan Zhang, Chunling Fan, Gangyi Jiang, Weisi Lin
arxiv.org/abs/2507.06234

@arXiv_csCL_bot@mastoxiv.page
2025-09-12 09:40:39

Reading Between the Lines: Classifying Resume Seniority with Large Language Models
Matan Cohen, Shira Shani, Eden Menahem, Yehudit Aperstein, Alexander Apartsin
arxiv.org/abs/2509.09229

@arXiv_csRO_bot@mastoxiv.page
2025-07-09 07:36:02

A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation
TRI LBM Team, Jose Barreiros, Andrew Beaulieu, Aditya Bhat, Rick Cory, Eric Cousineau, Hongkai Dai, Ching-Hsin Fang, Kunimatsu Hashimoto, Muhammad Zubair Irshad, Masha Itkina, Naveen Kuppuswamy, Kuan-Hui Lee, Katherine Liu, Dale McConachie, Ian McMahon, Haruki Nishimura, Calder Phillips-Grafflin, Charles Richter, Paarth Shah, Krishnan Srinivasan, Blake Wulfe, Chen Xu, Mengchao Zhang, Alex Alspach, Maya …

@arXiv_csCR_bot@mastoxiv.page
2025-07-08 11:12:31

VLAI: A RoBERTa-Based Model for Automated Vulnerability Severity Classification
Cédric Bonhomme, Alexandre Dulaunoy
arxiv.org/abs/2507.03607

@arXiv_csAI_bot@mastoxiv.page
2025-09-08 09:39:00

LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation
Yinglin Duan, Zhengxia Zou, Tongwei Gu, Wei Jia, Zhan Zhao, Luyi Xu, Xinzhu Liu, Hao Jiang, Kang Chen, Shuang Qiu
arxiv.org/abs/2509.05263

@arXiv_csLG_bot@mastoxiv.page
2025-09-01 09:58:12

Activation Subspaces for Out-of-Distribution Detection
Barış Zöngür, Robin Hesse, Stefan Roth
arxiv.org/abs/2508.21695

@arXiv_csRO_bot@mastoxiv.page
2025-09-11 10:04:03

Augmenting Neural Networks-based Model Approximators in Robotic Force-tracking Tasks
Kevin Saad, Vincenzo Petrone, Enrico Ferrentino, Pasquale Chiacchio, Francesco Braghin, Loris Roveda
arxiv.org/abs/2509.08440

@arXiv_csCV_bot@mastoxiv.page
2025-08-06 10:44:20

OmniShape: Zero-Shot Multi-Hypothesis Shape and Pose Estimation in the Real World
Katherine Liu, Sergey Zakharov, Dian Chen, Takuya Ikeda, Greg Shakhnarovich, Adrien Gaidon, Rares Ambrus
arxiv.org/abs/2508.03669

@arXiv_csLG_bot@mastoxiv.page
2025-09-05 10:28:01

When three experiments are better than two: Avoiding intractable correlated aleatoric uncertainty by leveraging a novel bias-variance tradeoff
Paul Scherer, Andreas Kirsch, Jake P. Taylor-King
arxiv.org/abs/2509.04363

@arXiv_csRO_bot@mastoxiv.page
2025-09-08 08:28:34

Hierarchical Reduced-Order Model Predictive Control for Robust Locomotion on Humanoid Robots
Adrian B. Ghansah, Sergio A. Esteban, Aaron D. Ames
arxiv.org/abs/2509.04722

@arXiv_csCV_bot@mastoxiv.page
2025-09-12 10:15:19

Measuring Epistemic Humility in Multimodal Large Language Models
Bingkui Tong, Jiaer Xia, Sifeng Shang, Kaiyang Zhou
arxiv.org/abs/2509.09658

@arXiv_csCL_bot@mastoxiv.page
2025-08-07 10:23:14

Unveiling the Landscape of Clinical Depression Assessment: From Behavioral Signatures to Psychiatric Reasoning
Zhuang Chen, Guanqun Bi, Wen Zhang, Jiawei Hu, Aoyun Wang, Xiyao Xiao, Kun Feng, Minlie Huang
arxiv.org/abs/2508.04531

@arXiv_csCV_bot@mastoxiv.page
2025-09-12 10:10:29

Texture-aware Intrinsic Image Decomposition with Model- and Learning-based Priors
Xiaodong Wang, Zijun He, Xin Yuan
arxiv.org/abs/2509.09352

@arXiv_csCV_bot@mastoxiv.page
2025-07-28 10:15:31

Back to the Features: DINO as a Foundation for Video World Models
Federico Baldassarre, Marc Szafraniec, Basile Terver, Vasil Khalidov, Francisco Massa, Yann LeCun, Patrick Labatut, Maximilian Seitzer, Piotr Bojanowski
arxiv.org/abs/2507.19468

@arXiv_csCV_bot@mastoxiv.page
2025-09-05 10:16:01

TriLiteNet: Lightweight Model for Multi-Task Visual Perception
Quang-Huy Che, Duc-Khai Lam
arxiv.org/abs/2509.04092 arxiv.org/pdf/2509.0409…