
2025-07-23 09:36:42
From Logic to Language: A Trust Index for Problem Solving with LLMs
Tehseen Rug, Felix Böhmer, Tessa Pfattheicher
https://arxiv.org/abs/2507.16028 h…
Formal Analysis of Networked PLC Controllers Interacting with Physical Environments
Jaeseo Lee, Kyungmin Bae
https://arxiv.org/abs/2507.15596 https://
Notes on applicative matching logic
Laurentiu Leustean
https://arxiv.org/abs/2506.10088 https://arxiv.org/pdf/2506.10088
Cognitive Castes: Artificial Intelligence, Epistemic Stratification, and the Dissolution of Democratic Discourse
Craig S Wright
https://arxiv.org/abs/2507.14218
To add a single example here (feel free to chime in with your own):
Problem: editing code is sometimes tedious because external APIs require boilerplate.
Solutions:
- Use LLM-generated code. Downsides: energy use, code theft, potential legal liability, frequent mistakes, etc. Upsides: popular among some peers, seems easy to use.
- Pick a better library (not always possible).
- Build internal functions that centralize the boilerplate, then call those (benefits: you get a better understanding of the external API and a more unit-testable internal code surface, and it's probably less effort once amortized; see the sketch after this list).
- Develop a non-LLM system that actually reasons about code at something like the formal semantics level and suggests boilerplate fill-ins based on rules, while foregrounding which rules it's applying so you can see the logic behind the suggestions (needs research).
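To make the third option concrete, here is a minimal Python sketch, assuming a hypothetical JSON-over-HTTP external API. The base URL, helper names (_call_api, get_user, create_order), and error-handling policy are illustrative, not taken from any real service or library:

import json
import urllib.request
import urllib.error

# Hypothetical external API. Every call needs the same auth header,
# JSON encoding/decoding, and error translation -- the boilerplate.
_BASE_URL = "https://api.example.com/v1"
_API_KEY = "REPLACE_ME"

def _call_api(path, payload=None, method="GET", timeout=10):
    """Single internal chokepoint for the external API's boilerplate."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    req = urllib.request.Request(
        f"{_BASE_URL}/{path}",
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as err:
        # Centralized error translation: callers see one exception type.
        raise RuntimeError(f"API call {method} {path} failed: {err.code}") from err

# Thin, unit-testable internal surface built on the chokepoint.
def get_user(user_id):
    return _call_api(f"users/{user_id}")

def create_order(user_id, items):
    return _call_api("orders", {"user": user_id, "items": items}, method="POST")

The specific helpers don't matter; the point is the structure: the repeated auth, encoding, and error-handling boilerplate lives behind one chokepoint, and the rest of the codebase calls small functions that are easy to mock in tests.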
Obviously LLM use in coding goes beyond this single issue, but a similar analysis applies to each potential use of LLMs in coding. In all cases there are:
1. Existing practical solutions that require more effort (or in many cases only seem to, and are actually less effort when amortized).
2. Near-term researchable solutions that directly address the problem and would be much more desirable in the long term.
Thus, in addition to LLMs' disastrous effects on the climate, on data laborers, and on the digital commons, they tend to suck us into cheap-seeming but ultimately costly design practices while also crowding out better long-term solutions. Next time someone suggests how useful LLMs are for some task, try asking yourself (or them) what an ideal solution for that task would look like, and whether LLM use moves us closer to or farther from a world in which that solution exists.
Fuzzy Lattice-based Description Logic
Yiwen Ding, Krishna Manoorkar
https://arxiv.org/abs/2506.05833 https://arxiv.org/pdf/2506.05833…
Logic Mining from Process Logs: Towards Automated Specification and Verification
Radoslaw Klimek, Julia Witek
https://arxiv.org/abs/2506.08628 https://
A Formal Refutation of the Blockchain Trilemma
Craig Wright
https://arxiv.org/abs/2507.05809 https://arxiv.org/pdf/2507.05809
Swarm-STL: A Framework for Motion Planning in Large-Scale, Multi-Swarm Systems
Shiyu Cheng, Luyao Niu, Bhaskar Ramasubramanian, Andrew Clark, Radha Poovendran
https://arxiv.org/abs/2506.14749
Trajectory Optimization for UAV-Based Medical Delivery with Temporal Logic Constraints and Convex Feasible Set Collision Avoidance
Kaiyuan Chen, Yuhan Suo, Shaowei Cui, Yuanqing Xia, Wannian Liang, Shuo Wang
https://arxiv.org/abs/2506.06038
Hazel Deriver: A Live Editor for Constructing Rule-Based Derivations
Zhiyao Zhong, Cyrus Omar
https://arxiv.org/abs/2506.10781 https://
This https://arxiv.org/abs/2504.07732 has been replaced.
Verifiable Natural Language to Linear Temporal Logic Translation: A Benchmark Dataset and Evaluation Suite
William H English, Chase Walker, Dominic Simon, Sumit Kumar Jha, Rickard Ewetz
https://arxiv.org/abs/2507.00877
AI's Euclid's Elements Moment: From Language Models to Computable Thought
Xinmin Fang, Lingfeng Tao, Zhengxiong Li
https://arxiv.org/abs/2506.23080
CCR 2.0: High-level Reasoning for Conditional Refinements
Youngju Song, Minki Cho
https://arxiv.org/abs/2507.04298 https://arxiv.org/…