Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_physicsgenph_bot@mastoxiv.page
2025-10-10 08:01:08

Holographic connection of f(G) gravity through Barrow and a generalized version of holographic dark fluid
Surajit Chattopadhyay
arxiv.org/abs/2510.07335

@arXiv_csCV_bot@mastoxiv.page
2025-09-09 12:27:02

MRI-Based Brain Tumor Detection through an Explainable EfficientNetV2 and MLP-Mixer-Attention Architecture
Mustafa Yurdakul, Şakir Taşdemir
arxiv.org/abs/2509.06713

@arXiv_csAI_bot@mastoxiv.page
2025-09-08 11:19:04

Crosslisted article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[3/6]:
- In-Context Policy Adaptation via Cross-Domain Skill Diffusion
Minjong Yoo, Woo Kyung Kim, Honguk Woo

@arXiv_csCL_bot@mastoxiv.page
2025-10-07 20:18:03

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/8]:
- TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration
Yanshu Li, Jianjiang Yang, Tian Yun, Pinyuan Feng, Jinfa Huang, Ruixiang Tang

@michabbb@social.vivaldi.net
2025-07-30 19:53:57

💰 Supports multiple dimensions and quantization options - binary 512d version outperforms OpenAI-v3-large while reducing vector database costs by 99.48%
🔍 Processes entire documents in single pass to generate chunk embeddings enriched with document-level context
🎯 Less sensitive to chunking strategies compared to traditional context-agnostic embedding models
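The 99.48% figure is consistent with simple storage arithmetic, assuming the baseline is a 3072-dimensional float32 embedding (my assumption; the post does not state the comparison point):

```python
# Sketch: how a ~99.48% vector-storage reduction can arise.
# Assumption: baseline is a 3072-dim float32 embedding (e.g. OpenAI-v3-large).
baseline_bytes = 3072 * 4   # float32: 4 bytes per dimension
binary_bytes = 512 // 8     # binary quantization: 1 bit per dimension
reduction = 1 - binary_bytes / baseline_bytes
print(f"{reduction:.2%}")   # → 99.48%
```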

@arXiv_mathDG_bot@mastoxiv.page
2025-07-31 08:56:11

Hypersurfaces of six-dimensional nearly Kähler manifolds
Mateo Anarella, Marie D'haene
arxiv.org/abs/2507.22526 arxiv.org/pdf/25…

@arXiv_csCL_bot@mastoxiv.page
2025-08-08 10:04:32

H-Net: Hierarchical Dynamic Chunking for Tokenizer-Free Language Modelling in Morphologically-Rich Languages
Mehrdad Zakershahrak, Samira Ghodratnama
arxiv.org/abs/2508.05628

@vrandecic@mas.to
2025-09-01 08:25:10

If you think there might be an AI bubble but tell yourself: don't worry, even if it pops, how bad can it be?
Here's one number: seven tech companies (NVIDIA, Microsoft, Google, Apple, Meta, Tesla, Amazon) are worth ~20 trillion, roughly a third of the total US stock market.
For context: total subprime mortgages in 2008 were 1.3 trillion, total US mortgage debt 10 trillion.
You have a 401k or index funds? Your money is likely tied up with the AI market. If you want that, that's fine. If you're …

@arXiv_hepth_bot@mastoxiv.page
2025-07-25 10:01:32

Open strings in type IIB AdS$_3$ flux vacua
Álvaro Arboleya, Adolfo Guarino, Matteo Morittu, Giuseppe Sudano
arxiv.org/abs/2507.18529

@arXiv_hepph_bot@mastoxiv.page
2025-08-18 09:28:20

The Masses of Fermions in the context of the Supersymmetric $SU(3)_{C}\times SU(3)_{L}\times U(1)_{N}$ Model
M. C. Rodriguez
arxiv.org/abs/2508.11456

@rperezrosario@mastodon.social
2025-07-19 01:09:31

Software Engineer Will Larson unpacks a lot in this July 2025 post. Key takeaway use cases of agentic AI include:
1. Using an LLM to evaluate a context window and get a result.
2. Using an LLM to suggest tools relevant to the context window, then enrich it with the tool’s response.
3. Managing flow control for tool usage.
4. Doing anything software can do to build better context windows to pass on to LLMs.
"What can agents actually do?"
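The four use cases above compose into a familiar agent loop; a minimal sketch, with every name (`call_llm`, `tools`) invented for illustration rather than taken from the post:

```python
# Minimal agentic loop: evaluate context, optionally call a suggested tool,
# enrich the context with the tool's response, and manage flow control.
from typing import Callable

def agent_loop(prompt: str,
               call_llm: Callable[[str], dict],
               tools: dict[str, Callable[[str], str]],
               max_steps: int = 5) -> str:
    context = prompt
    for _ in range(max_steps):
        reply = call_llm(context)               # 1. evaluate the context window
        if reply.get("tool") in tools:          # 2. model suggested a relevant tool
            result = tools[reply["tool"]](reply["args"])
            context += f"\n[{reply['tool']} -> {result}]"  # enrich the context
        else:                                   # 3. flow control: no tool => done
            return reply["text"]
    return context
```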

@arXiv_csHC_bot@mastoxiv.page
2025-09-04 09:00:01

Finding My Way: Influence of Different Audio Augmented Reality Navigation Cues on User Experience and Subjective Usefulness
Sina Hinzmann, Francesco Vona, Juliane Henning, Mohamed Amer, Omar Abdellatif, Tanja Kojic, Jan-Niklas Voigt-Antons
arxiv.org/abs/2509.03199

@arXiv_mathRT_bot@mastoxiv.page
2025-09-03 10:39:43

Combinatorics of monoidal actions in Lie-algebraic context
Volodymyr Mazorchuk, Xiaoyu Zhu
arxiv.org/abs/2509.01404 arxiv.org/pdf/2509.0140…

@arXiv_nuclth_bot@mastoxiv.page
2025-10-02 09:21:50

Next highest weight and other lower $SU(3)$ irreducible representations with proxy-$SU(4)$ symmetry for nuclei with $32 \le \mbox{Z,N} \le 46$
V. K. B. Kota
arxiv.org/abs/2510.00800

@arXiv_astrophEP_bot@mastoxiv.page
2025-09-05 09:02:51

A JWST/MIRI view of k Andromedae b: Refining its mass, age, and physical parameters
N. Godoy, E. Choquet, E. Serabyn, M. Malin, P. Tremblin, C. Danielski, P. O. Lagage, A. Boccaletti, B. Charnay, M. E. Ressler
arxiv.org/abs/2509.03624

@Erikmitk@mastodon.gamedev.place
2025-08-26 08:05:11

Am ***I*** a needle in somebody's haystack!?
arxiv.org/html/2502.05167v3

Screenshot of the paper “NoLiMa: Long-Context Evaluation Beyond Literal Matching”

The text is too long for the description but it shows Section 3 of the paper. The example given for finding a relevant piece of information (“the needle”) in a very long context (“the haystack”) is a person living in Dresden. Since I also live in Dresden, I found that remarkable!

@arXiv_hepex_bot@mastoxiv.page
2025-10-02 09:38:10

Limiting the Parameter Space for Unstable eV-scale Neutrinos Using IceCube Data
Abbasi, Ackermann, Adams, Agarwalla, Aguilar, Ahlers, Alameddine, Ali, Amin, Andeen, Argüelles, Ashida, Athanasiadou, Axani, Babu, Bai, Baines-Holmes, V., Barwick, Bash, Basu, Bay, Beatty, Tjus, Behrens, Beise, Bellenghi, Benkel, BenZvi, Berley, Bernardini, Besson, Blaufuss, Bloom, Blot, Bodo, Bontempo, Motzkin, Meneguolo, Böser, Botner, Böttcher, Braun, Brinson, Brisson-Tsavoussis, Bur…

@arXiv_csLG_bot@mastoxiv.page
2025-07-31 13:34:56

Replaced article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[3/4]:
- Provable Low-Frequency Bias of In-Context Learning of Representations
Yongyi Yang, Hidenori Tanaka, Wei Hu

@arXiv_csCV_bot@mastoxiv.page
2025-08-29 10:31:21

${C}^{3}$-GS: Learning Context-aware, Cross-dimension, Cross-scale Feature for Generalizable Gaussian Splatting
Yuxi Hu, Jun Zhang, Kuangyi Chen, Zhe Zhang, Friedrich Fraundorfer
arxiv.org/abs/2508.20754

@arXiv_astrophHE_bot@mastoxiv.page
2025-09-03 10:15:03

Studying the black widow pulsars PSR J0312$-$0921 and PSR J1627$-$3219 in the optical and X-rays
A. V. Bobakov, A. Kirichenko, S. V. Zharikov, D. A. Zyuzin, A. V. Karpova, Yu. A. Shibanov, T. Begari
arxiv.org/abs/2509.01488

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-07-18 09:22:12

Suppression of Thermal Conductivity via Singlet-Dominated Scattering in TmFeO$_3$
M. L. McLanahan, D. Lederman, A. P. Ramirez
arxiv.org/abs/2507.12608

@michabbb@social.vivaldi.net
2025-07-30 19:53:57

#VoyageAI introduces voyage-context-3, a contextualized chunk #embedding #llm that captures both chunk details and full document context 🔍

@arXiv_csCR_bot@mastoxiv.page
2025-09-19 07:38:11

Early Approaches to Adversarial Fine-Tuning for Prompt Injection Defense: A 2022 Study of GPT-3 and Contemporary Models
Gustavo Sandoval, Denys Fenchenko, Junyao Chen
arxiv.org/abs/2509.14271

@arXiv_csCL_bot@mastoxiv.page
2025-08-06 14:26:50

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/4]:
- LaMPE: Length-aware Multi-grained Positional Encoding for Adaptive Long-context Scaling Without T...
Sikui Zhang, Guangze Gao, Ziyun Gan, Chunfeng Yuan, Zefeng Lin, Houwen Peng, Bing Li, Weiming Hu

@grahamperrin@bsd.cafe
2025-09-22 08:32:42

@… thanks, I wonder whether Ventoy is compatible with 15.0.
reddit.com/r/freebsd/comments/

@arXiv_grqc_bot@mastoxiv.page
2025-07-30 09:28:21

Lunar Reference Timescale
Adrien Bourgoin (LTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, LNE, Paris, France), Pascale Defraigne (Royal Observatory of Belgium, Brussels, Belgium), Frédéric Meynadier (Time Department, BIPM, Pavillon de Breteuil, Sèvres, France)
arxiv.org/abs/2507.215…

@unchartedworlds@scicomm.xyz
2025-07-17 19:39:08
Content warning: covid in the UK - stats & map

Useful thread from @… with latest UK covid positivity results (from last week). Going up a bit - more noticeably in certain areas.
For context, "positivity rate" isn't "how many people have covid overall": it's "when we bothered actually doing tests, how many of the tests came back positive".
So for example if you test 30 people, and 3 of the tests came back positive, that's a "10% positivity rate".
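That arithmetic as a one-liner (a sketch; not from the thread):

```python
# The worked example above: positivity rate = positives / tests performed.
def positivity_rate(positive_tests: int, total_tests: int) -> float:
    """Share of performed tests that came back positive -- not overall prevalence."""
    return positive_tests / total_tests

print(f"{positivity_rate(3, 30):.0%}")  # → 10%
```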
Sometimes there are blank places on the map where, if there _was_ any testing that week, they didn't bother sending it in.
#covid #UK #stats

@arXiv_astrophSR_bot@mastoxiv.page
2025-08-26 10:19:17

The Impact of Axion-Like Particles on Late Stellar Evolution From Intermediate-Mass Stars to core-collapse Supernova Progenitors
Inmacolata Domínguez, Oscar Straniero, Luciano Piersanti, Maurizio Giannotti, Alessandro Mirizzi
arxiv.org/abs/2508.17779

@arXiv_nlincd_bot@mastoxiv.page
2025-09-22 12:37:33

Replaced article(s) found for nlin.CD. arxiv.org/list/nlin.CD/new
[1/1]:
- Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machin...
Yuanzhao Zhang, William Gilpin

@arXiv_csAR_bot@mastoxiv.page
2025-08-26 07:31:46

GPT-OSS-20B: A Comprehensive Deployment-Centric Analysis of OpenAI's Open-Weight Mixture of Experts Model
Deepak Kumar, Divakar Yadav, Yash Patel
arxiv.org/abs/2508.16700

@arXiv_csCL_bot@mastoxiv.page
2025-09-05 13:02:56

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[2/3]:
- Context Reasoner: Incentivizing Reasoning Capability for Contextualized Privacy and Safety Compli...
Hu, Li, Jing, Hu, Zeng, Han, Xu, Chu, Hu, Song

@arXiv_csAI_bot@mastoxiv.page
2025-08-29 13:04:42

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[3/4]:
- Dynamic Context Compression for Efficient RAG
Shuyu Guo, Zhaochun Ren

@arXiv_condmatquantgas_bot@mastoxiv.page
2025-07-29 08:17:31

Low-energy atomic scattering: s-wave relation between the interaction potential and the phase shift
Francesco Lorenzi, Luca Salasnich
arxiv.org/abs/2507.20421

@arXiv_mathNT_bot@mastoxiv.page
2025-08-19 09:57:30

On the analytic rank of the twin prime elliptic curve $y^2=x(x-2)(x-p)$
Kirti Joshi
arxiv.org/abs/2508.12340 arxiv.org/pdf/2508.12340

@arXiv_mathCO_bot@mastoxiv.page
2025-09-16 10:56:16

Restricted Jacobi permutations
Alyssa G. Henke, Kyle R. Hoffman, Derek H. Stephens, Yongwei Yuan, Yan Zhuang
arxiv.org/abs/2509.11494 arxiv…

@arXiv_qbiobm_bot@mastoxiv.page
2025-07-14 08:15:42

Unavailability of experimental 3D structural data on protein folding dynamics and necessity for a new generation of structure prediction methods in this context
Aydin Wells, Khalique Newaz, Jennifer Morones, Jianlin Cheng, Tijana Milenković
arxiv.org/abs/2507.08188

@arXiv_astrophIM_bot@mastoxiv.page
2025-07-22 09:55:30

Electron impact ro-vibrational transitions and dissociative recombination of H2 and HD: Rate coefficients and astrophysical implications
Riyad Hassaine, Emerance Djuissi, Nicolina Pop, Felix Iacob, Michel D. Epée Epée, Ousmanou Motapon, Vincenzo Laporta, Razvan Bogdan, Mehdi Ayouz, Mourad Telmini, Carla M. Coppola, Daniele Galli, Janos Zs. Mezei, Ioan F. Schneider

@arXiv_mathRT_bot@mastoxiv.page
2025-09-01 07:56:03

On the boundary Carrollian conformal algebra
Lucas Buzaglo, Xiao He, Tuan Anh Pham, Haijun Tan, Girish S Vishwa, Kaiming Zhao
arxiv.org/abs/2508.21603

@arXiv_csCR_bot@mastoxiv.page
2025-08-27 09:55:42

LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres
Ronal Singh, Shahroz Tariq, Fatemeh Jalalvand, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris, Martin Lochner
arxiv.org/abs/2508.18947

@arXiv_mathCV_bot@mastoxiv.page
2025-07-23 08:53:22

Rigidity of proper holomorphic self-mappings of the hexablock
Enchao Bi, Zeinab Shaaban, Guicong Su
arxiv.org/abs/2507.16176

@arXiv_astrophGA_bot@mastoxiv.page
2025-07-22 09:33:50

Dwarf-Dwarf interactions and their influence on star formation: Insights from post-merger galaxies
Rakshit Chauhan, Smitha Subramanian, Deepak A. Kudari, S. Amrutha, Mousumi Das
arxiv.org/abs/2507.14695

@arXiv_mathGR_bot@mastoxiv.page
2025-07-18 08:39:02

On finite extensions of lamplighter groups
Corentin Bodart
arxiv.org/abs/2507.13203 arxiv.org/pdf/2507.13203

@arXiv_csCY_bot@mastoxiv.page
2025-09-12 08:33:29

Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
Vincent C. Müller
arxiv.org/abs/2509.08835 arxiv.org/pdf/2…

@michabbb@social.vivaldi.net
2025-08-13 09:46:40

🤖 Context-Aware Agents
Build agents maintaining context across hundreds of tool calls and multi-step workflows with complete #API documentation, tool definitions, and interaction histories without losing coherence.
💰 Pricing Structure
Standard rates: Prompts ≤200K tokens ($3/MTok input, $15/MTok output)
Extended context: Prompts >200K tokens ($6/MTok input, $22.50/MTok out…
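The tiered rates above can be turned into a small cost estimator (a sketch using only the numbers quoted in the post; I'm assuming the 200K threshold applies to the prompt/input size):

```python
# Cost sketch for the two pricing tiers quoted above ($ per million tokens).
def cost_usd(input_tokens: int, output_tokens: int) -> float:
    if input_tokens <= 200_000:
        in_rate, out_rate = 3.00, 15.00    # standard tier
    else:
        in_rate, out_rate = 6.00, 22.50    # extended-context tier
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 300K-token prompt with a 2K-token reply falls in the extended tier
print(f"${cost_usd(300_000, 2_000):.2f}")
```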

@arXiv_astrophCO_bot@mastoxiv.page
2025-09-15 08:32:21

Reionization in HESTIA: Studying reionization in the LG through zoom simulations
David Attard, Luke Conaboy, Noam Libeskind, Sergey Pillipenko, Keri Dixon, Ilian T. Iliev
arxiv.org/abs/2509.10133

@arXiv_csCV_bot@mastoxiv.page
2025-07-16 10:37:01

CATVis: Context-Aware Thought Visualization
Tariq Mehmood, Hamza Ahmad, Muhammad Haroon Shakeel, Murtaza Taj
arxiv.org/abs/2507.11522

@arXiv_qbioQM_bot@mastoxiv.page
2025-08-13 09:02:32

An Interactive Platform for Unified Assessment of Drug-Drug Interactions Using Descriptive and Pharmacokinetic Data
Nadezhda Diadkina
arxiv.org/abs/2508.08351

@arXiv_csCL_bot@mastoxiv.page
2025-09-01 07:39:42

Mapping Toxic Comments Across Demographics: A Dataset from German Public Broadcasting
Jan Fillies, Michael Peter Hoffmann, Rebecca Reichel, Roman Salzwedel, Sven Bodemer, Adrian Paschke
arxiv.org/abs/2508.21084

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented the functionality yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and for the open-source movement in particular.
The network of "citations" that forms as open-source software builds on other open-source software, and as people contribute patches to each others' projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
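As a minimal illustration of the principle (an invented example, not from the post):

```python
# Invented example: the same validation logic copy-pasted in two places...
def register_user(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    ...  # create the account

def invite_user(email: str) -> None:
    if "@" not in email or email.startswith("@"):  # a drifted copy waiting to happen
        raise ValueError(f"invalid email: {email}")
    ...  # send the invite

# ...versus the DRY version: define the check once, reference it everywhere.
def validate_email(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")

def register_user_dry(email: str) -> None:
    validate_email(email)
    ...  # create the account
```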
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open-source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module and function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when one tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and a source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls; even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while models can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks they're asymptotically likely to violate it.
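For what it's worth, the "lookup tools for accurate signatures" idea is cheap to sketch in Python, where `inspect.signature` returns a function's real parameter list (illustrative only; `fetch` is a made-up stand-in for an existing codebase function):

```python
import inspect

def signature_of(func) -> str:
    """Return a function's true signature as a string -- the kind of context
    one could inject so a code model stops guessing parameter names."""
    return f"{func.__name__}{inspect.signature(func)}"

# Hypothetical library function standing in for "an existing function":
def fetch(url: str, timeout: float = 10.0, retries: int = 3) -> bytes: ...

print(signature_of(fetch))
# → fetch(url: str, timeout: float = 10.0, retries: int = 3) -> bytes
```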
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open-source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding

@arXiv_csAI_bot@mastoxiv.page
2025-09-22 14:05:32

Replaced article(s) found for cs.AI. arxiv.org/list/cs.AI/new
[3/6]:
- A Layered Multi-Expert Framework for Long-Context Mental Health Assessments
Jinwen Tang, Qiming Guo, Wenbo Sun, Yi Shang

@arXiv_grqc_bot@mastoxiv.page
2025-09-22 09:24:21

Theory space and stability analysis of General Relativistic cosmological solutions in modified gravity
Saikat Chakraborty, Piyabut Burikham
arxiv.org/abs/2509.15762

@arXiv_csCL_bot@mastoxiv.page
2025-07-31 13:22:08

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[2/3]:
- MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models
Zhongzhan Huang, Guoming Ling, Shanshan Zhong, Hefeng Wu, Liang Lin

@arXiv_astrophEP_bot@mastoxiv.page
2025-07-21 09:02:50

XUE 10. The CO$_2$-rich terrestrial planet-forming region of an externally irradiated Herbig disk
Jenny Frediani, Arjan Bik, María Claudia Ramírez-Tannus, Rens Waters, Konstantin V. Getman, Eric D. Feigelson, Bayron Portilla-Revelo, Benoît Tabone, Thomas J. Haworth, Andrew Winter, Thomas Henning, Giulia Perotti, Alexis Brandeker, Germán Chaparro, Pablo Cuartas-Restrepo, Sebastián Hernández, Michael A. Kuhn, Thomas Preibisch, Veronica Roccatagliata, Sierk…

@arXiv_mathCV_bot@mastoxiv.page
2025-08-18 07:46:30

The higher spin $\Pi$-operator in Clifford analysis
Wanqing Cheng, Chao Ding
arxiv.org/abs/2508.11271 arxiv.org/pdf/2508.11271

@arXiv_csAI_bot@mastoxiv.page
2025-09-18 08:08:41

Agentic UAVs: LLM-Driven Autonomy with Integrated Tool-Calling and Cognitive Reasoning
Anis Koubaa, Khaled Gabr
arxiv.org/abs/2509.13352 ar…

@arXiv_astrophGA_bot@mastoxiv.page
2025-08-14 09:07:32

The Interstellar Medium in IZw18 seen with JWST/MIRI: I. Highly Ionized Gas
L. K. Hunt, A. Aloisi, M. G. Navarro, R. J. Rickards Vaught, B. T. Draine, A. Adamo, F. Annibali, D. Calzetti, S. Hernandez, B. L. James, M. Mingozzi, R. Schneider, M. Tosi, B. Brandl, M. G. del Valle-Espinosa, F. Donnan, A. S. Hirschauer, M. Meixner, D. Rigopoulou, C. T. Richardson, J. M. Levanti, A. R. Basu-Zych

@unixorn@hachyderm.io
2025-08-16 12:47:22

OH on slack
User Story
As Overloaded Olivia, the backend engineer,
I want clear, actionable requirements with business context, so that I can implement the correct solution without burning half a day in meetings or wild guessing.
Acceptance Criteria
1. The story includes functional requirements (not just vibes).
2. Success/failure states are defined (happy path + edge cases).
3. Any dependencies or blockers are identified.
Bonus: PM/Designer reviewed this and it’s not just a draft in disguise.
#swe #sre #devops #devoops @… @…

@arXiv_grqc_bot@mastoxiv.page
2025-09-18 09:45:41

Local existence theory for a class of CMC gauges for the Einstein-non-linear scalar field equations
Hans Ringström
arxiv.org/abs/2509.14110

@arXiv_csCL_bot@mastoxiv.page
2025-09-26 14:12:41

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/5]:
- ILRe: Intermediate Layer Retrieval for Context Compression in Causal Language Models
Manlai Liang, Mandi Liu, Jiangzhou Ji, Huaijun Li, Haobo Yang, Yaohan He, Jinlong Li

@arXiv_csCL_bot@mastoxiv.page
2025-09-17 10:37:50

The Few-shot Dilemma: Over-prompting Large Language Models
Yongjian Tang, Doruk Tuncel, Christian Koerner, Thomas Runkler
arxiv.org/abs/2509.13196

@arXiv_csCL_bot@mastoxiv.page
2025-08-18 09:41:20

HumorPlanSearch: Structured Planning and HuCoT for Contextual AI Humor
Shivam Dubey
arxiv.org/abs/2508.11429 arxiv.org/pdf/2508.11429

@arXiv_csCL_bot@mastoxiv.page
2025-09-23 20:04:47

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[3/10]:
- Context-aware Biases for Length Extrapolation
Ali Veisi, Hamidreza Amirzadeh, Amir Mansourian