Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csAI_bot@mastoxiv.page
2025-08-15 09:37:12

Diversity First, Quality Later: A Two-Stage Assumption for Language Model Alignment
Zetian Sun, Dongfang Li, Baotian Hu
arxiv.org/abs/2508.10530

@arXiv_eessSP_bot@mastoxiv.page
2025-10-15 09:20:21

Moment-based Posterior Sampling for Multi-reference Alignment
Axel Janson, Joakim Andén
arxiv.org/abs/2510.12651 arxiv.org/pdf/2510.1…

@arXiv_csIR_bot@mastoxiv.page
2025-07-16 08:33:51

Aligned Query Expansion: Efficient Query Expansion for Information Retrieval through LLM Alignment
Adam Yang, Gustavo Penha, Enrico Palumbo, Hugues Bouchard
arxiv.org/abs/2507.11042

@arXiv_csLG_bot@mastoxiv.page
2025-09-12 10:09:29

Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication
Maysam Behmanesh, Erkan Turan, Maks Ovsjanikov
arxiv.org/abs/2509.09597

@arXiv_csAI_bot@mastoxiv.page
2025-10-15 09:37:41

Precise Attribute Intensity Control in Large Language Models via Targeted Representation Editing
Rongzhi Zhang, Liqin Ye, Yuzhao Heng, Xiang Chen, Tong Yu, Lingkai Kong, Sudheer Chava, Chao Zhang
arxiv.org/abs/2510.12121

@arXiv_csCY_bot@mastoxiv.page
2025-08-12 08:41:33

Towards Integrated Alignment
Ben Y. Reis, William La Cava
arxiv.org/abs/2508.06592 arxiv.org/pdf/2508.06592

@arXiv_csCV_bot@mastoxiv.page
2025-10-13 10:37:50

FLOWING: Implicit Neural Flows for Structure-Preserving Morphing
Arthur Bizzi, Matias Grynberg, Vitor Matias, Daniel Perazzo, João Paulo Lima, Luiz Velho, Nuno Gonçalves, João Pereira, Guilherme Schardong, Tiago Novello
arxiv.org/abs/2510.09537

@arXiv_csIR_bot@mastoxiv.page
2025-08-15 09:18:22

DAS: Dual-Aligned Semantic IDs Empowered Industrial Recommender System
Wencai Ye, Mingjie Sun, Shaoyun Shi, Peng Wang, Wenjin Wu, Peng Jiang
arxiv.org/abs/2508.10584

@pavelasamsonov@mastodon.social
2025-09-08 14:37:01

You ask your roommate to buy toilet paper. They show you the receipt as proof. The next morning, when you need toilet paper, the drawer is actually empty. This is because they used an innovative new method called Lean Shopping, where instead of buying the things they just print out a receipt — saving time and money.
This is a story about the social nature of problem framing, and when "high velocity" becomes less productive.

@arXiv_csCL_bot@mastoxiv.page
2025-10-07 12:13:22

Do LLMs Align with My Task? Evaluating Text-to-SQL via Dataset Alignment
Davood Rafiei, Morgan Lindsay Heisler, Weiwei Zhang, Mohammadreza Pourreza, Yong Zhang
arxiv.org/abs/2510.04919

@arXiv_statML_bot@mastoxiv.page
2025-09-08 07:59:19

Any-Step Density Ratio Estimation via Interval-Annealed Secant Alignment
Wei Chen, Shigui Li, Jiacheng Li, Jian Xu, Zhiqi Lin, Junmei Yang, Delu Zeng, John Paisley, Qibin Zhao
arxiv.org/abs/2509.04852

@arXiv_quantph_bot@mastoxiv.page
2025-09-08 09:58:10

Exploring an implementation of quantum learning pipeline for support vector machines
Mario Bifulco, Luca Roversi
arxiv.org/abs/2509.04983 a…

@arXiv_csAI_bot@mastoxiv.page
2025-09-10 09:55:01

Getting In Contract with Large Language Models -- An Agency Theory Perspective On Large Language Model Alignment
Sascha Kaltenpoth, Oliver M\"uller
arxiv.org/abs/2509.07642

@arXiv_mathCT_bot@mastoxiv.page
2025-09-09 08:35:32

Categorical Tiling Theory: Constructing Directed Planar Tilings via Edge Reversal
Catherine DiLeo, Preston Sessoms, Brandon T. Shapiro
arxiv.org/abs/2509.06363

@arXiv_csLG_bot@mastoxiv.page
2025-10-08 10:48:49

Primal-Dual Direct Preference Optimization for Constrained LLM Alignment
Yihan Du, Seo Taek Kong, R. Srikant
arxiv.org/abs/2510.05703 arxiv…

@arXiv_mathOC_bot@mastoxiv.page
2025-08-25 08:31:40

A unified vertical alignment and earthwork model in road design with a new convex optimization model for road networks
Sayan Sadhukhan, Warren Hare, Yves Lucet
arxiv.org/abs/2508.15953

@arXiv_mathAG_bot@mastoxiv.page
2025-08-28 08:00:10

AG codes from the Hermitian curve for Cross-Subspace Alignment in Private Information Retrieval
Francesco Ghiandoni, Massimo Giulietti, Enrico Mezzano, Marco Timpanella
arxiv.org/abs/2508.19459

@arXiv_csCL_bot@mastoxiv.page
2025-08-07 10:28:54

FaST: Feature-aware Sampling and Tuning for Personalized Preference Alignment with Limited Data
Thibaut Thonet, Germán Kruszewski, Jos Rozen, Pierre Erbacher, Marc Dymetman
arxiv.org/abs/2508.04698

@arXiv_csCY_bot@mastoxiv.page
2025-07-29 09:15:41

Justifications for Democratizing AI Alignment and Their Prospects
Andr\'e Steingr\"uber, Kevin Baum
arxiv.org/abs/2507.19548 arxiv…

@arXiv_csLG_bot@mastoxiv.page
2025-10-06 10:24:39

Bootstrap Learning for Combinatorial Graph Alignment with Sequential GNNs
Marc Lelarge
arxiv.org/abs/2510.03086 arxiv.org/pdf/2510.03086

@tiotasram@kolektiva.social
2025-07-30 18:26:14

A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counterarguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI

@arXiv_csAI_bot@mastoxiv.page
2025-10-08 10:30:29

Refusal Falls off a Cliff: How Safety Alignment Fails in Reasoning?
Qingyu Yin, Chak Tou Leong, Linyi Yang, Wenxuan Huang, Wenjie Li, Xiting Wang, Jaehong Yoon, YunXing, XingYu, Jinjin Gu
arxiv.org/abs/2510.06036

@arXiv_csRO_bot@mastoxiv.page
2025-09-18 10:04:51

Pre-Manipulation Alignment Prediction with Parallel Deep State-Space and Transformer Models
Motonari Kambara, Komei Sugiura
arxiv.org/abs/2509.13839

@arXiv_quantph_bot@mastoxiv.page
2025-10-08 10:10:09

A New Quantum Linear System Algorithm Beyond the Condition Number and Its Application to Solving Multivariate Polynomial Systems
Jianqiang Li
arxiv.org/abs/2510.05588

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 09:44:31

RANA: Robust Active Learning for Noisy Network Alignment
Yixuan Nan, Xixun Lin, Yanmin Shang, Zhuofan Li, Can Zhao, Yanan Cao
arxiv.org/abs/2507.22434

@arXiv_csAI_bot@mastoxiv.page
2025-09-03 13:55:43

EigenBench: A Comparative Behavioral Measure of Value Alignment
Jonathn Chang, Leonard Piff, Suvadip Sana, Jasmine X. Li, Lionel Levine
arxiv.org/abs/2509.01938

@arXiv_csSE_bot@mastoxiv.page
2025-09-25 09:53:12

The Cream Rises to the Top: Efficient Reranking Method for Verilog Code Generation
Guang Yang, Wei Zheng, Xiang Chen, Yifan Sun, Fengji Zhang, Terry Yue Zhuo
arxiv.org/abs/2509.20215

@arXiv_csCY_bot@mastoxiv.page
2025-10-09 07:33:30

LLM-Driven Rubric-Based Assessment of Algebraic Competence in Multi-Stage Block Coding Tasks with Design and Field Evaluation
Yong Oh Lee, Byeonghun Bang, Sejun Oh
arxiv.org/abs/2510.06253

@arXiv_csCL_bot@mastoxiv.page
2025-10-06 10:17:09

XTRA: Cross-Lingual Topic Modeling with Topic and Representation Alignments
Tien Phat Nguyen, Vu Minh Ngo, Tung Nguyen, Linh Van Ngo, Duc Anh Nguyen, Sang Dinh, Trung Le
arxiv.org/abs/2510.02788

@arXiv_csCV_bot@mastoxiv.page
2025-10-01 11:53:37

TTT3R: 3D Reconstruction as Test-Time Training
Xingyu Chen, Yue Chen, Yuliang Xiu, Andreas Geiger, Anpei Chen
arxiv.org/abs/2509.26645 arxi…

@arXiv_econTH_bot@mastoxiv.page
2025-09-19 07:39:31

Friend or Foe: Delegating to an AI Whose Alignment is Unknown
Drew Fudenberg, Annie Liang
arxiv.org/abs/2509.14396 arxiv.org/pdf/2509.14396…

@arXiv_csCY_bot@mastoxiv.page
2025-09-30 10:21:31

Open Opportunities in AI Safety, Alignment, and Ethics (AI SAE)
Dylan Waldner
arxiv.org/abs/2509.24065 arxiv.org/pdf/2509.24065

@arXiv_csMA_bot@mastoxiv.page
2025-08-27 07:44:32

Skill-Aligned Fairness in Multi-Agent Learning for Collaboration in Healthcare
Promise Osaine Ekpo, Brian La, Thomas Wiener, Saesha Agarwal, Arshia Agrawal, Gonzalo Gonzalez-Pumariega, Lekan P. Molu, Angelique Taylor
arxiv.org/abs/2508.18708

@arXiv_csRO_bot@mastoxiv.page
2025-07-29 11:21:11

Uni-Mapper: Unified Mapping Framework for Multi-modal LiDARs in Complex and Dynamic Environments
Gilhwan Kang, Hogyun Kim, Byunghee Choi, Seokhwan Jeong, Young-Sik Shin, Younggun Cho
arxiv.org/abs/2507.20538

@arXiv_csCV_bot@mastoxiv.page
2025-08-22 10:10:11

Aligning Moments in Time using Video Queries
Yogesh Kumar, Uday Agarwal, Manish Gupta, Anand Mishra
arxiv.org/abs/2508.15439 arxiv.org/pdf/…

@arXiv_csAI_bot@mastoxiv.page
2025-09-30 13:35:11

UniAPL: A Unified Adversarial Preference Learning Framework for Instruct-Following
FaQiang Qian, WeiKun Zhang, Ziliang Wang, Kang An, Xuhui Zheng, Liangjian Wen, Mengya Gao, Yong Dai, Yichao Wu
arxiv.org/abs/2509.25148

@arXiv_csIR_bot@mastoxiv.page
2025-09-25 08:35:22

Multimodal-enhanced Federated Recommendation: A Group-wise Fusion Approach
Chunxu Zhang, Weipeng Zhang, Guodong Long, Zhiheng Xue, Riting Xia, Bo Yang
arxiv.org/abs/2509.19955

@arXiv_eessSP_bot@mastoxiv.page
2025-08-18 07:55:40

Near-Field Variable-Width Beam Coverage and Codebook Design for XL-RIS
Yida Zhang, Qiuyan Liu, Qiang Wang, Hongtao Luo, Yuqi Xia
arxiv.org/abs/2508.11178

@arXiv_csAI_bot@mastoxiv.page
2025-09-19 09:56:41

Internalizing Self-Consistency in Language Models: Multi-Agent Consensus Alignment
Ankur Samanta, Akshayaa Magesh, Youliang Yu, Runzhe Wu, Ayush Jain, Daniel Jiang, Boris Vidolov, Paul Sajda, Yonathan Efroni, Kaveh Hassani
arxiv.org/abs/2509.15172

@arXiv_csCL_bot@mastoxiv.page
2025-08-18 09:44:40

Language models align with brain regions that represent concepts across modalities
Maria Ryskina, Greta Tuckute, Alexander Fung, Ashley Malkin, Evelina Fedorenko
arxiv.org/abs/2508.11536

@arXiv_csAI_bot@mastoxiv.page
2025-08-19 09:49:50

Overcoming Knowledge Discrepancies: Structuring Reasoning Threads through Knowledge Balancing in Interactive Scenarios
Daniel Burkhardt, Xiangwei Cheng
arxiv.org/abs/2508.12100