
2025-10-10 08:01:08
Holographic connection of f(G) gravity through Barrow and a generalized version of holographic dark fluid
Surajit Chattopadhyay
https://arxiv.org/abs/2510.07335 https://
MRI-Based Brain Tumor Detection through an Explainable EfficientNetV2 and MLP-Mixer-Attention Architecture
Mustafa Yurdakul, Şakir Taşdemir
https://arxiv.org/abs/2509.06713
Crosslisted article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[3/6]:
- In-Context Policy Adaptation via Cross-Domain Skill Diffusion
Minjong Yoo, Woo Kyung Kim, Honguk Woo
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/8]:
- TACO: Enhancing Multimodal In-context Learning via Task Mapping-Guided Sequence Configuration
Yanshu Li, Jianjiang Yang, Tian Yun, Pinyuan Feng, Jinfa Huang, Ruixiang Tang
💰 Supports multiple dimensions and quantization options - binary 512d version outperforms OpenAI-v3-large while reducing vector database costs by 99.48%
🔍 Processes entire documents in single pass to generate chunk embeddings enriched with document-level context
🎯 Less sensitive to chunking strategies compared to traditional context-agnostic embedding models
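For intuition on that 99.48% figure: a 512-d binary embedding packs to 64 bytes, versus 12,288 bytes for a 3072-d float32 vector like OpenAI-v3-large's, and 64/12,288 ≈ 0.52% of the storage. Below is a minimal sketch of binary quantization and Hamming-distance comparison over the packed codes; this is illustrative numpy, not VoyageAI's actual implementation:

```python
import numpy as np

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Quantize to 1 bit per dimension (the sign), then pack 8 bits/byte:
    512 float32 values (2048 bytes) become 64 bytes."""
    return np.packbits((embedding > 0).astype(np.uint8))

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Binary vectors are compared by Hamming distance: the number of
    differing bits between the packed codes (lower = more similar)."""
    return int(np.unpackbits(a ^ b).sum())

rng = np.random.default_rng(0)
query, doc = rng.standard_normal(512), rng.standard_normal(512)
q, d = binarize(query), binarize(doc)
print(q.nbytes, "bytes packed vs", query.astype(np.float32).nbytes, "bytes float32")
print("Hamming distance:", hamming_distance(q, d))
```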
Hypersurfaces of six-dimensional nearly Kähler manifolds
Mateo Anarella, Marie D'haene
https://arxiv.org/abs/2507.22526 https://arxiv.org/pdf/25…
H-Net: Hierarchical Dynamic Chunking for Tokenizer-Free Language Modelling in Morphologically-Rich Languages
Mehrdad Zakershahrak, Samira Ghodratnama
https://arxiv.org/abs/2508.05628
If you think there might be an AI bubble, but tell yourself: don't worry, even if it pops, how bad can it be?
Here's one number: seven tech companies (NVIDIA, Microsoft, Google, Apple, Meta, Tesla, Amazon) are worth ~$20 trillion, about a third of the total US stock market.
For context: total subprime mortgages in 2008 were ~$1.3 trillion, and total US mortgage debt was ~$10 trillion.
You have a 401k or index funds? Your money is likely tied to the AI market. If you want that, that's fine. If you're …
Open strings in type IIB AdS$_3$ flux vacua
Álvaro Arboleya, Adolfo Guarino, Matteo Morittu, Giuseppe Sudano
https://arxiv.org/abs/2507.18529 https://
The Masses of Fermions in the context of the Supersymmetric $SU(3)_{C}\times SU(3)_{L}\times U(1)_{N}$ Model
M. C. Rodriguez
https://arxiv.org/abs/2508.11456 https://
Software Engineer Will Larson unpacks a lot in this July 2025 post. Key takeaway use cases of agentic AI include:
1. Using an LLM to evaluate a context window and get a result.
2. Using an LLM to suggest tools relevant to the context window, then enrich it with the tool’s response.
3. Managing flow control for tool usage.
4. Doing anything software can do to build better context windows to pass on to LLMs.
"What can agents actually do?"
Finding My Way: Influence of Different Audio Augmented Reality Navigation Cues on User Experience and Subjective Usefulness
Sina Hinzmann, Francesco Vona, Juliane Henning, Mohamed Amer, Omar Abdellatif, Tanja Kojic, Jan-Niklas Voigt-Antons
https://arxiv.org/abs/2509.03199
Combinatorics of monoidal actions in Lie-algebraic context
Volodymyr Mazorchuk, Xiaoyu Zhu
https://arxiv.org/abs/2509.01404 https://arxiv.org/pdf/2509.0140…
Next highest weight and other lower $SU(3)$ irreducible representations with proxy-$SU(4)$ symmetry for nuclei with $32 \le \mbox{Z,N} \le 46$
V. K. B. Kota
https://arxiv.org/abs/2510.00800
A JWST/MIRI view of κ Andromedae b: Refining its mass, age, and physical parameters
N. Godoy, E. Choquet, E. Serabyn, M. Malin, P. Tremblin, C. Danielski, P. O. Lagage, A. Boccaletti, B. Charnay, M. E. Ressler
https://arxiv.org/abs/2509.03624
Am ***I*** a needle in somebody's haystack!?
https://arxiv.org/html/2502.05167v3
Limiting the Parameter Space for Unstable eV-scale Neutrinos Using IceCube Data
Abbasi, Ackermann, Adams, Agarwalla, Aguilar, Ahlers, Alameddine, Ali, Amin, Andeen, Argüelles, Ashida, Athanasiadou, Axani, Babu, Bai, Baines-Holmes, V., Barwick, Bash, Basu, Bay, Beatty, Tjus, Behrens, Beise, Bellenghi, Benkel, BenZvi, Berley, Bernardini, Besson, Blaufuss, Bloom, Blot, Bodo, Bontempo, Motzkin, Meneguolo, Böser, Botner, Böttcher, Braun, Brinson, Brisson-Tsavoussis, Bur…
Replaced article(s) found for cs.LG. https://arxiv.org/list/cs.LG/new
[3/4]:
- Provable Low-Frequency Bias of In-Context Learning of Representations
Yongyi Yang, Hidenori Tanaka, Wei Hu
…
${C}^{3}$-GS: Learning Context-aware, Cross-dimension, Cross-scale Feature for Generalizable Gaussian Splatting
Yuxi Hu, Jun Zhang, Kuangyi Chen, Zhe Zhang, Friedrich Fraundorfer
https://arxiv.org/abs/2508.20754
Studying the black widow pulsars PSR J0312$-$0921 and PSR J1627$+$3219 in the optical and X-rays
A. V. Bobakov, A. Kirichenko, S. V. Zharikov, D. A. Zyuzin, A. V. Karpova, Yu. A. Shibanov, T. Begari
https://arxiv.org/abs/2509.01488
Suppression of Thermal Conductivity via Singlet-Dominated Scattering in TmFeO$_3$
M. L. McLanahan, D. Lederman, A. P. Ramirez
https://arxiv.org/abs/2507.12608
#VoyageAI introduces voyage-context-3, a contextualized chunk #embedding #llm that captures both chunk details and full document context 🔍
Early Approaches to Adversarial Fine-Tuning for Prompt Injection Defense: A 2022 Study of GPT-3 and Contemporary Models
Gustavo Sandoval, Denys Fenchenko, Junyao Chen
https://arxiv.org/abs/2509.14271
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/4]:
- LaMPE: Length-aware Multi-grained Positional Encoding for Adaptive Long-context Scaling Without T...
Sikui Zhang, Guangze Gao, Ziyun Gan, Chunfeng Yuan, Zefeng Lin, Houwen Peng, Bing Li, Weiming Hu
@… thanks, I wonder whether Ventoy is compatible with 15.0.
https://www.reddit.com/r/freebsd/comments/1nh3z73/comment/nfe4kd9/?contex…
Lunar Reference Timescale
Adrien Bourgoin (LTE, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, LNE, Paris, France), Pascale Defraigne (Royal Observatory of Belgium, Brussels, Belgium), Frédéric Meynadier (Time Department, BIPM, Pavillon de Breteuil, Sèvres, France)
https://arxiv.org/abs/2507.215…
Useful thread from @… with latest UK covid positivity results (from last week). Going up a bit - more noticeably in certain areas.
For context, "positivity rate" isn't "how many people have covid overall": it's "when we bothered actually doing tests, how many of the tests came back positive".
So for example if you test 30 people, and 3 of the tests come back positive, that's a "10% positivity rate".
Sometimes there are blank places on the map where, if there _was_ any testing that week, they didn't bother sending it in.
#covid #UK #stats
The Impact of Axion-Like Particles on Late Stellar Evolution From Intermediate-Mass Stars to core-collapse Supernova Progenitors
Inmacolata Domínguez, Oscar Straniero, Luciano Piersanti, Maurizio Giannotti, Alessandro Mirizzi
https://arxiv.org/abs/2508.17779
Replaced article(s) found for nlin.CD. https://arxiv.org/list/nlin.CD/new
[1/1]:
- Context parroting: A simple but tough-to-beat baseline for foundation models in scientific machin...
Yuanzhao Zhang, William Gilpin
GPT-OSS-20B: A Comprehensive Deployment-Centric Analysis of OpenAI's Open-Weight Mixture of Experts Model
Deepak Kumar, Divakar Yadav, Yash Patel
https://arxiv.org/abs/2508.16700
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[2/3]:
- Context Reasoner: Incentivizing Reasoning Capability for Contextualized Privacy and Safety Compli...
Hu, Li, Jing, Hu, Zeng, Han, Xu, Chu, Hu, Song
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[3/4]:
- Dynamic Context Compression for Efficient RAG
Shuyu Guo, Zhaochun Ren
https://
Low-energy atomic scattering: s-wave relation between the interaction potential and the phase shift
Francesco Lorenzi, Luca Salasnich
https://arxiv.org/abs/2507.20421 https://…
On the analytic rank of the twin prime elliptic curve $y^2=x(x-2)(x-p)$
Kirti Joshi
https://arxiv.org/abs/2508.12340 https://arxiv.org/pdf/2508.12340
Restricted Jacobi permutations
Alyssa G. Henke, Kyle R. Hoffman, Derek H. Stephens, Yongwei Yuan, Yan Zhuang
https://arxiv.org/abs/2509.11494 https://arxiv…
Unavailability of experimental 3D structural data on protein folding dynamics and necessity for a new generation of structure prediction methods in this context
Aydin Wells, Khalique Newaz, Jennifer Morones, Jianlin Cheng, Tijana Milenković
https://arxiv.org/abs/2507.08188
Electron impact ro-vibrational transitions and dissociative recombination of H2 and HD: Rate coefficients and astrophysical implications
Riyad Hassaine, Emerance Djuissi, Nicolina Pop, Felix Iacob, Michel D. Epée Epée, Ousmanou Motapon, Vincenzo Laporta, Razvan Bogdan, Mehdi Ayouz, Mourad Telmini, Carla M. Coppola, Daniele Galli, Janos Zs. Mezei, Ioan F. Schneider
On the boundary Carrollian conformal algebra
Lucas Buzaglo, Xiao He, Tuan Anh Pham, Haijun Tan, Girish S Vishwa, Kaiming Zhao
https://arxiv.org/abs/2508.21603 https://
LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres
Ronal Singh, Shahroz Tariq, Fatemeh Jalalvand, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris, Martin Lochner
https://arxiv.org/abs/2508.18947
Rigidity of proper holomorphic self-mappings of the hexablock
Enchao Bi, Zeinab Shaaban, Guicong Su
https://arxiv.org/abs/2507.16176 https://
Dwarf-Dwarf interactions and their influence on star formation: Insights from post-merger galaxies
Rakshit Chauhan, Smitha Subramanian, Deepak A. Kudari, S. Amrutha, Mousumi Das
https://arxiv.org/abs/2507.14695
On finite extensions of lamplighter groups
Corentin Bodart
https://arxiv.org/abs/2507.13203 https://arxiv.org/pdf/2507.13203
Deep opacity and AI: A threat to XAI and to privacy protection mechanisms
Vincent C. Müller
https://arxiv.org/abs/2509.08835 https://arxiv.org/pdf/2…
🤖 Context-Aware Agents
Build agents that maintain context across hundreds of tool calls and multi-step workflows, with complete #API documentation, tool definitions, and interaction histories, without losing coherence.
💰 Pricing Structure
Standard rates: Prompts ≤200K tokens ($3/MTok input, $15/MTok output)
Extended context: Prompts >200K tokens ($6/MTok input, $22.50/MTok out…
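A quick sketch of what that tier boundary does to a request's cost, using only the rates quoted above (my arithmetic, not an official calculator):

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Price a request with the tiered rates quoted above:
    <=200K prompt tokens: $3/MTok in, $15/MTok out;
    >200K prompt tokens:  $6/MTok in, $22.50/MTok out."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 3.00, 15.00
    else:
        in_rate, out_rate = 6.00, 22.50
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

print(f"${request_cost(150_000, 4_000):.2f}")  # $0.51 (standard tier)
print(f"${request_cost(250_000, 4_000):.2f}")  # $1.59 (extended tier)
```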
Reionization in HESTIA: Studying reionization in the LG through zoom simulations
David Attard, Luke Conaboy, Noam Libeskind, Sergey Pillipenko, Keri Dixon, Ilian T. Iliev
https://arxiv.org/abs/2509.10133
CATVis: Context-Aware Thought Visualization
Tariq Mehmood, Hamza Ahmad, Muhammad Haroon Shakeel, Murtaza Taj
https://arxiv.org/abs/2507.11522 https://
An Interactive Platform for Unified Assessment of Drug-Drug Interactions Using Descriptive and Pharmacokinetic Data
Nadezhda Diadkina
https://arxiv.org/abs/2508.08351 https://…
Mapping Toxic Comments Across Demographics: A Dataset from German Public Broadcasting
Jan Fillies, Michael Peter Hoffmann, Rebecca Reichel, Roman Salzwedel, Sven Bodemer, Adrian Paschke
https://arxiv.org/abs/2508.21084
LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. It makes the code more error-prone and harder to debug, because the copies can drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy).
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed.

At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement. Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. But still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented stuff yourself from the start.

It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular. The network of "citations", as open-source software builds on other open-source software and people contribute patches to each others' projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
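To make that concrete, a toy illustration (mine, not from the post):

```python
# DRY in miniature: instead of copy-pasting this check into create_user,
# invite_user, reset_password, ... it lives in one place they all reference.
def validate_email(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email!r}")

def create_user(email: str) -> None:
    validate_email(email)   # one shared definition...
    print(f"created {email}")

def invite_user(email: str) -> None:
    validate_email(email)   # ...referenced from every call site that needs it
    print(f"invited {email}")

create_user("ada@example.com")
invite_user("grace@example.com")
```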
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility, and therefore maintainability, correctness, and security, will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module and function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances, because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls (a minimal sketch of that idea follows this list), but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. Would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
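On the signature-guessing point: the lookup tooling already exists. A minimal sketch using Python's standard inspect module; wiring it into a model's tool loop (and indexing your own codebase) is the part left out here:

```python
import inspect
import json

def tool_get_signature(func) -> str:
    """A lookup tool an LLM could call before writing a call site,
    so it uses the real parameters instead of guessing them."""
    return f"{func.__name__}{inspect.signature(func)}"

# Feed the model the actual signature as context:
print(tool_get_signature(tool_get_signature))
print(tool_get_signature(json.dumps))  # prints json.dumps's full parameter list
```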
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding
Replaced article(s) found for cs.AI. https://arxiv.org/list/cs.AI/new
[3/6]:
- A Layered Multi-Expert Framework for Long-Context Mental Health Assessments
Jinwen Tang, Qiming Guo, Wenbo Sun, Yi Shang
Theory space and stability analysis of General Relativistic cosmological solutions in modified gravity
Saikat Chakraborty, Piyabut Burikham
https://arxiv.org/abs/2509.15762 http…
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[2/3]:
- MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models
Zhongzhan Huang, Guoming Ling, Shanshan Zhong, Hefeng Wu, Liang Lin
XUE 10. The CO$_2$-rich terrestrial planet-forming region of an externally irradiated Herbig disk
Jenny Frediani, Arjan Bik, María Claudia Ramírez-Tannus, Rens Waters, Konstantin V. Getman, Eric D. Feigelson, Bayron Portilla-Revelo, Benoît Tabone, Thomas J. Haworth, Andrew Winter, Thomas Henning, Giulia Perotti, Alexis Brandeker, Germán Chaparro, Pablo Cuartas-Restrepo, Sebastián Hernández, Michael A. Kuhn, Thomas Preibisch, Veronica Roccatagliata, Sierk…
The higher spin $\Pi$-operator in Clifford analysis
Wanqing Cheng, Chao Ding
https://arxiv.org/abs/2508.11271 https://arxiv.org/pdf/2508.11271
Agentic UAVs: LLM-Driven Autonomy with Integrated Tool-Calling and Cognitive Reasoning
Anis Koubaa, Khaled Gabr
https://arxiv.org/abs/2509.13352 https://ar…
The Interstellar Medium in IZw18 seen with JWST/MIRI: I. Highly Ionized Gas
L. K. Hunt, A. Aloisi, M. G. Navarro, R. J. Rickards Vaught, B. T. Draine, A. Adamo, F. Annibali, D. Calzetti, S. Hernandez, B. L. James, M. Mingozzi, R. Schneider, M. Tosi, B. Brandl, M. G. del Valle-Espinosa, F. Donnan, A. S. Hirschauer, M. Meixner, D. Rigopoulou, C. T. Richardson, J. M. Levanti, A. R. Basu-Zych
OH on Slack
User Story
As Overloaded Olivia, the backend engineer,
I want clear, actionable requirements with business context, so that I can implement the correct solution without burning half a day in meetings or wild guessing.
Acceptance Criteria
1. The story includes functional requirements (not just vibes).
2. Success/failure states are defined (happy path and edge cases).
3. Any dependencies or blockers are identified.
Bonus: PM/Designer reviewed this and it’s not just a draft in disguise.
#swe #sre #devops #devoops @… @…
Local existence theory for a class of CMC gauges for the Einstein-non-linear scalar field equations
Hans Ringström
https://arxiv.org/abs/2509.14110 https://
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/5]:
- ILRe: Intermediate Layer Retrieval for Context Compression in Causal Language Models
Manlai Liang, Mandi Liu, Jiangzhou Ji, Huaijun Li, Haobo Yang, Yaohan He, Jinlong Li
The Few-shot Dilemma: Over-prompting Large Language Models
Yongjian Tang, Doruk Tuncel, Christian Koerner, Thomas Runkler
https://arxiv.org/abs/2509.13196 https://
HumorPlanSearch: Structured Planning and HuCoT for Contextual AI Humor
Shivam Dubey
https://arxiv.org/abs/2508.11429 https://arxiv.org/pdf/2508.11429
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[3/10]:
- Context-aware Biases for Length Extrapolation
Ali Veisi, Hamidreza Amirzadeh, Amir Mansourian