
2025-07-23 16:50:21
Dolphins QB Tua Tagovailoa says relationship with WR Tyreek Hill 'still a work in progress' https://www.nfl.com/news/dolphins-qb-tua-tagovailoa-says-relationship-with-wr-tyreek-hill-still-a-work-in-progress
Raiders Confident in Group That is a Work in Progress https://www.si.com/nfl/raiders/news/raiders-training-camp-elandon-roberts-devin-white-pete-carroll
AnyMAC: Cascading Flexible Multi-Agent Collaboration via Next-Agent Prediction
Song Wang, Zhen Tan, Zihan Chen, Shuang Zhou, Tianlong Chen, Jundong Li
https://arxiv.org/abs/2506.17784
Historical Tech Tree
The tech tree is an interactive visualization of technological history from 3 million years ago to today. A work in progress, it currently contains 1988 technologies and 2369 connections between them.
⏳ https://www.historicaltechtree.com
Understanding the Drag Torque in Common Envelope Evolution
Soumik Bhattacharyya, Luke Chamandy, Eric G. Blackman, Adam Frank, Baowei Liu
https://arxiv.org/abs/2506.19547
OpenNav: Open-World Navigation with Multimodal Large Language Models
Mingfeng Yuan, Letian Wang, Steven L. Waslander
https://arxiv.org/abs/2507.18033
Better Bounds for Semi-Streaming Single-Source Shortest Paths
Sepehr Assadi, Gary Hoppenworth, Janani Sundaresan
https://arxiv.org/abs/2507.17841
Spotted this Progress Pride Graffiti at the weekend in a carpark that I often use. There's some awesome work there!
#Graffiti #ProgressPride #Pride
The results of the development of the SPHERE-3 detector for studying the PCR mass composition in the 1-1000 PeV energy range. The status of 2025
D. V. Chernov, E. A. Bonvech, O. V. Cherkesova, E. L. Entina, V. I. Galkin, V. A. Ivanov, T. A. Kolodkin, N. O. Ovcharenko, D. A. Podgrudkov, T. M. Roganova, M. D. Ziva
https://arxiv.or…
Lower Bounds against the Ideal Proof System in Finite Fields
Tal Elbaz, Nashlen Govindasamy, Jiaqi Lu, Iddo Tzameret
https://arxiv.org/abs/2506.17210
Various thi.ng updates, bug fixes, additions and new version of https://github.com/thi-ng/zig-thing/ — now fully compatible with current Zig v0.14.1
On a more diary/devlog note: I also updated several of my Zig based work-in-progress art pieces to the latest version (some of them not touc…
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design and consider myself competent at it, with plenty of direct experience of what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance"?
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore that one too; my focus here is on how complex a project students can reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without careful prompting of a kind those students don't know how to do, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course those students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
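To make the "fixed set of constructs" idea concrete, here's a minimal sketch (my own illustration, not an existing tool) of what the gating half of such a design would need: checking whether generated code stays within an allowlist of language constructs a course has actually covered. The names `ALLOWED_NODES` and `uses_only_allowed` are hypothetical.

```python
# Sketch: reject generated code that uses constructs outside what a
# hypothetical intro course has covered. Purely illustrative.
import ast

# Constructs the imagined course has taught: assignments, loops,
# conditionals, function definitions/calls, and basic expressions.
ALLOWED_NODES = {
    ast.Module, ast.Assign, ast.Name, ast.Store, ast.Load, ast.Constant,
    ast.Expr, ast.Call, ast.FunctionDef, ast.arguments, ast.arg, ast.Return,
    ast.If, ast.Compare, ast.Eq, ast.Lt, ast.Gt, ast.For, ast.While,
    ast.BinOp, ast.Add, ast.Sub, ast.Mult, ast.List, ast.Attribute, ast.Pass,
}

def uses_only_allowed(source: str) -> list[str]:
    """Return the names of constructs in `source` outside the allowlist."""
    tree = ast.parse(source)
    return sorted({type(node).__name__ for node in ast.walk(tree)
                   if type(node) not in ALLOWED_NODES})

# A beginner-level snippet passes...
print(uses_only_allowed("total = 0\nfor x in [1, 2, 3]:\n    total = total + x"))  # []
# ...but a walrus operator gets flagged.
print(uses_only_allowed("while (chunk := 0):\n    pass"))  # ['NamedExpr']
```

Note this only handles syntax; limiting frameworks and library calls would be a much harder problem, which is partly the point.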
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask an instructor or TA for help getting rid of the code they don't understand, then re-prompt or manually add code they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
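For readers who haven't hit these constructs themselves, here's a short runnable illustration (mine, not from any study) of two of the ones named above, the walrus operator and while/else, which LLMs will happily emit and which even strong students often misread:

```python
# Two constructs an LLM may emit that a second- or third-year student
# may never have seen: the walrus operator (:=) and while/else.

def first_long_word(words, min_len=5):
    it = iter(words)
    # := binds and tests in one expression; easy to misread as ==.
    while (word := next(it, None)) is not None:
        if len(word) >= min_len:
            return word
    else:
        # The else clause runs only if the loop ended without break --
        # a rule plenty of experienced programmers also get wrong.
        return None

print(first_long_word(["a", "bb", "elephant"]))  # elephant
print(first_long_word(["a", "bb"]))              # None
```

Nothing here is exotic by working-programmer standards, which is exactly the trap: the assistant's output looks ordinary to its training distribution while sitting outside what the course has taught.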
To be sure, a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since our institutions already have plenty of features that make life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
Graph-Coarsening for Machine Learning Coarse-grained Molecular Dynamics
Soumya Mondal, Subhanu Halder, Debarchan Basu, Sandeep Kumar, Tarak Karmakar
https://arxiv.org/abs/2507.16531
Raiders’ long ball connection remains a work in progress https://www.reviewjournal.com/sports/raiders/raiders-long-ball-connection-remains-a-work-in-progress-3422923/
WIP: Leveraging LLMs for Enforcing Design Principles in Student Code: Analysis of Prompting Strategies and RAG
Dhruv Kolhatkar, Soubhagya Akkena, Edward F. Gehringer
https://arxiv.org/abs/2508.11717
Did a little 9 mile loop this morning on the bike... Later on the way to work I spotted the progress of the new bike path on Lisbon Avenue. Lisbon has gone from a street I didn't even like to drive on to a street I look forward to biking on.
#mke #milwaukee
From Near-Integrable to Far-from-Integrable: A Unified Picture of Thermalization and Heat Transport
Weicheng Fu, Zhen Wang, Yisen Wang, Yong Zhang, Hong Zhao
https://arxiv.org/abs/2508.15566
@… Do share screenshots, etc. if you’re up for it. I’d love to see your work in progress. 😄
ToxiFrench: Benchmarking and Enhancing Language Models via CoT Fine-Tuning for French Toxicity Detection
Axel Delaval, Shujian Yang, Haicheng Wang, Han Qiu, Jialiang Lu
https://arxiv.org/abs/2508.11281
DMOSpeech 2: Reinforcement Learning for Duration Prediction in Metric-Optimized Speech Synthesis
Yinghao Aaron Li, Xilin Jiang, Fei Tao, Cheng Niu, Kaifeng Xu, Juntong Song, Nima Mesgarani
https://arxiv.org/abs/2507.14988
How popular media gets love wrong
Now a bit of background about why I have this "engineered" model of love:
First, I'm a white straight cis man. I've got a few traits that might work against my relationship chances (e.g., neurodivergence; I generally fit pretty well into the "weird geek" stereotype), but as I was recently reminded, it's possible my experience derives more from luck than other factors, and since things are tilted more in my favor than most people on the planet, my advice could be worse than useless if it leads people towards strategies that would only have worked for someone like me. I don't *think* that's the case, but it's worth mentioning explicitly.
When I first started dating my now-wife, we were both in graduate school. I was 26, and had exactly zero dating/romantic experience through that point in my life. In other words, a pretty stereotypical "incel", although I definitely didn't subscribe to incel ideology at all. I felt lonely, and vaguely wanted a romantic relationship (I'm neither aromantic nor asexual), but had never felt socially comfortable enough to pursue one before. I don't drink and dislike most social gatherings like parties or bars; I mostly hung around the fringes of the few college parties I attended, and although I had a reasonable college social life in terms of friends, I didn't really do anything to pursue romance, feeling too awkward to know where to start. I had the beginnings of crushes in both high school and college, but never developed a really strong crush, probably correlated with not putting myself in many social situations outside of close all-male friend gatherings. I never felt remotely comfortable enough to act on any of the proto-crushes I did have. I did watch porn and masturbate, so one motivation for pursuing a relationship was physical intimacy, but loneliness was as much of a motivating factor, and of course the social pressure to date was a factor too, even though I'm quite contrarian.
I'm lucky in that I had some mixed-gender social circles already like intramural soccer and a graduate-student housing potluck. Graduate school makes a *lot* more of these social spaces accessible, so I recognize that those not in school of some sort have a harder time of things, especially if like me they don't feel like they fit in in typical adult social spaces like bars.
However, at one point I just decided that my desire for a relationship would need action on my part and so I'd try to build a relationship and see what happened. I worked up my courage and asked one of the people in my potluck if she'd like to go for a hike (pretty much clearly a date but not explicitly one; in retrospect not the best first-date modality in a lot of ways, but it made a little more sense in our setting where we could go for a hike from our front door). To emphasize this point: I was not in love with (or even infatuated with) my now-wife at that point. I made a decision to be open to building a relationship, but didn't follow the typical romance story formula beyond that. Now of course, in real life as opposed to popular media, this isn't anything special. People ask each other out all the time just because they're lonely, and some of those relationships turn out fine (although many do not).
I was lucky in that some aspects of who I am and what I do happened to be naturally comforting to my wife (natural advantage in the "appeal" model of love) but of course there are some aspects of me that annoy my wife, and we negotiate that. In the other direction, there's some things I instantly liked about my wife, and other things that still annoy me. We've figured out how to accept a little, change a little, and overall be happy with each other (though we do still have arguments; it's not like the operation/construction/maintenance of the "love mechanism" is always perfectly smooth). In particular though, I approached the relationship with the attitude of "I want to try to build a relationship with this person," at first just because of my own desires for *any* relationship, and then gradually more and more through my desire to build *this specific* relationship as I enjoyed the rewards of companionship.
So for example, while I think my wife is objectively beautiful, she's also *subjectively* very beautiful *to me* because having decided to build a relationship with her, I actively tried to see her as beautiful, rather than trying to judge whether I wanted a relationship with her based on her beauty. In other words, our relationship is more causative of her beauty-to-me than her beauty-to-me is causative of our relationship. This is the biggest way I think the "engineered" model of love differs from the "fire" and "appeal" models: you can just decide to build love independent of factors we typically think of as engendering love (NOT independent of your partner's willingness to participate, of course), and then all of those things like "thinking your partner is beautiful" can be a result of the relationship you're building. For sure those factors might affect who is willing to try building a relationship with you in the first place, but if more people were willing to jump into relationship building (not necessarily with full commitment from the start) without worrying about those other factors, they might find that those factors can come out of the relationship instead of being prerequisites for it. I think this is the biggest failure of the "appeal" model in particular: yes you *do* need to do things that appeal to your partner, but it's not just "make myself lovable" it's also: is your partner putting in the effort to see the ways that you are beautiful/lovable/etc., or are they just expecting you to become exactly some perfect person they've imagined (and/or been told to desire by society)? The former is perfectly possible, and no less satisfying than the latter.
To cut off my rambling a bit here, I'll just add that in our progress from dating through marriage through staying-married, my wife and I have both talked at times explicitly about commitment, and especially when deciding to get married, I told her that I knew I couldn't live up to the perfect model of a husband that I'd want to be, but that if she wanted to deepen our commitment, I was happy to do that, and so we did. I also rearranged my priorities at that point, deciding that I knew I wanted to prioritize this relationship above things like my career or my research interests, and while I've not always been perfect at that in my little decisions, I've been good at holding to that in my big decisions at least. In the end, *once we had built a somewhat-committed relationship*, we had something that we both recognized was worth more than most other things in life, and that let us commit even more, thus getting even more out of it in the long term. Obviously you can't start the first date with an expectation of life-long commitment, and you need to synchronize your increasing commitment to a relationship so that it doesn't become lopsided, which is hard. But if you take the commitment as an active decision and as the *precursor* to things like infatuation, attraction, etc., you can build up to something that's incredibly strong and rewarding.
I'll follow this up with one more post trying to distill some advice from my ramblings.
#relationships #love
Is Transfer Learning Necessary for Violin Transcription?
Yueh-Po Peng, Ting-Kang Wang, Li Su, Vincent K. M. Cheung
https://arxiv.org/abs/2508.13516
Statistics in 3d gravity from knots and links
Jeevan Chandra
https://arxiv.org/abs/2508.10864 https://arxiv.org/pdf/2508.10864
Continuing on my Meshtastic kick, (probably because I keep buying radios to play with...) This time I have a configuration guide for common settings I've found which are useful. Still a work in progress but I think most of the common options are there.
If I've forgotten something, gotten something wrong, or you have a trick I should add, let me know!
WIP: Exploring the Value of a Debugging Cheat Sheet and Mini Lecture in Improving Undergraduate Debugging Skills and Mindset
Andrew Ash, John Hu
https://arxiv.org/abs/2506.11339
WIP: Turning Fake Chips into Learning Opportunities
Haniye Mehraban, Saad Azmeen-ur-Rahman, John Hu
https://arxiv.org/abs/2507.13281
Vela: Scalable Embeddings with Voice Large Language Models for Multimodal Retrieval
Ruofan Hu, Yan Xia, Minjie Hong, Jieming Zhu, Bo Chen, Xiaoda Yang, Minghui Fang, Tao Jin
https://arxiv.org/abs/2506.14445
Work in progress
It's almost time! We can't wait to welcome you to Berlin Buzzwords tomorrow for our 16th edition!
Grab your Last Minute Tickets now: https://tickets.plainschwarz.com/bbuzz25/c/7bjWaoGyO/
Versatile Wavelength-Division Multiplexed Quantum Key Distribution Network Operating Simultaneously in the O and C Bands
Davide Scalcon, Matteo Padovan, Paolo Villoresi, Giuseppe Vallone, Marco Avesani
https://arxiv.org/abs/2507.11175
Clearly a work in progress. Lots of promise, but very disjointed while chemistry is built, fitness is gained, and adaptations to a faster and more physical league are happening. It's a lot of roster changes.
The defense looked shaky, but often they were put in bad positions by turnovers with too many players committed. Outside backs and midfielders.
Thrilled for Chiesa to get one. Happy for Mo to continue the opening day streak.
Ekitike is my MOTM. He already looks comfor…
Scaling Learned Image Compression Models up to 1 Billion
Yuqi Li, Haotian Zhang, Li Li, Dong Liu, Feng Wu
https://arxiv.org/abs/2508.09075
Also, in other other #ThingUmbrella related news. My current consulting/advisor contract finishes end of the month, so I will have some time to write about various other recent updates (and unpublished work-in-progress) in early August... Please stay tuned! ✌️
🤩 The latest Safari 26 Beta does not disappoint in terms of #CSS features:
⚓️ Anchor Positioning!
🪄 Scroll-driven animations!
🌯 text-wrap: pretty!
🌈 contrast-color()!
⏲️ progress()!
and more!
And this sentence: “We value the principles of progressive enhancement and separation of concerns.” 😉
Great work, @…
#TooLong4FessHole I have been trying to become better friends with the partner of my bestie for over a decade since they hooked up - for the obvious reasons. I never felt as though I was making any progress; all of the work was being done by me and I still felt like I was receiving "disapproving father in law" vibes.
Recently I behaved poorly about something; I immediately recon…
Shipwright: Proving liveness of distributed systems with Byzantine participants
Derek Leung, Nickolai Zeldovich, Frans Kaashoek
https://arxiv.org/abs/2507.14080
Integrated microheater on the 4H-silicon-carbide-on-insulator platform and its applications
Wenhan Sun, Ruixuan Wang, Jingwei Li, Haipeng Zhang, Zhensheng Jia, Qing Li
https://arxiv.org/abs/2506.15035
No More Marching: Learning Humanoid Locomotion for Short-Range SE(2) Targets
Pranay Dugar, Mohitvishnu S. Gadde, Jonah Siekmann, Yesh Godse, Aayam Shrestha, Alan Fern
https://arxiv.org/abs/2508.14098
Global linear drift-wave eigenmode structures on flux surfaces in stellarators: ion temperature gradient mode
Hongxuan Zhu, H. Chen, Z. Lin, A. Bhattacharjee
https://arxiv.org/abs/2506.12948
Proceedings 14th International Workshop on Trends in Functional Programming in Education
Rose Bohrer (AIST, Tokyo, JP)
https://arxiv.org/abs/2508.02305
Cowboys Coach Speaks Out On New QB’s Progress https://heavy.com/sports/nfl/dallas-cowboys/joe-milton-doing-well-progress/
Slow progress the last couple days because I've been going into the office for work and the commute is eating up my time I'd otherwise spend on such things.
But now it's the weekend and the switch engine board is coming together nicely.
Still have 514 nets to route - mostly the supervisor, line card management buses, and power supply but also some other odds and ends like the FPGA JTAG and part of the SPI flash.
Also I have to get the tach/PWM signals from the m…
Beyond Formal Semantics for Capabilities and Skills: Model Context Protocol in Manufacturing
Luis Miguel Vieira da Silva, Aljosha Köcher, Felix Gehlhoff
https://arxiv.org/abs/2506.11180
Evaluating LLMs on Chinese Idiom Translation
Cai Yang, Yao Dou, David Heineman, Xiaofeng Wu, Wei Xu
https://arxiv.org/abs/2508.10421
Towards Automatic Evaluation and High-Quality Pseudo-Parallel Dataset Construction for Audio Editing: A Human-in-the-Loop Method
Yuhang Jia, Hui Wang, Xin Nie, Yujie Guo, Lianru Gao, Yong Qin
https://arxiv.org/abs/2508.11966
2-dimensional TFTs via modular $\infty$-operads
Jan Steinebrunner
https://arxiv.org/abs/2506.22104 https://arxiv.org/pdf/2506.22104…
This https://arxiv.org/abs/2506.05678 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csLG_…
#PhantastikPrompts 10.06.:
How do you feel about gender-inclusive language?
Anyone who follows my posts will likely have noticed that I use neutral forms wherever possible (and, where that really can't be made to work, the gender asterisk). Even to the point that I've coined a few neologisms for my own use, for others' use, and for the world of my novel…
MMHU: A Massive-Scale Multimodal Benchmark for Human Behavior Understanding
Renjie Li, Ruijie Ye, Mingyang Wu, Hao Frank Yang, Zhiwen Fan, Hezhen Hu, Zhengzhong Tu
https://arxiv.org/abs/2507.12463
This https://arxiv.org/abs/2405.08719 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_sta…
CLASS_SZ II: Notes and Examples of Fast and Accurate Calculations of Halo Model, Large Scale Structure and Cosmic Microwave Background Observables
Boris Bolliet, Aleksandra Kusiak, Fiona McCarthy, Alina Sabyr, Kristen Surrao, Jens Chluba, Carmen Embil Villagra, Simone Ferraro, Boryana Hadzhiyska, Dongwon Han, J. Colin Hill, Juan Francisco Macías-Pérez, Mathew Madhavacheril, Abhishek Maniyar, Yogesh Mehta, Shivam Pandey, Emmanuel Schaan, Blake Sherwin, Alessio Spurio Mancini…
Reliable Magnetometry for Antiferromagnets and Thin Films: Correcting Substrate Artifacts in Mn3Sn/MgO Systems
Katarzyna Gas, Maciej Sawicki
https://arxiv.org/abs/2507.01385
This https://arxiv.org/abs/2404.08578 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
Breaking a Logarithmic Barrier in the Stopping Time Convergence Rate of Stochastic First-order Methods
Yasong Feng, Yifan Jiang, Tianyu Wang, Zhiliang Ying
https://arxiv.org/abs/2506.23335
AEGISS -- Atomic orbital and Entropy-based Guided Inference for Space Selection -- A novel semi-automated active space selection workflow for quantum chemistry and quantum computing applications
Fabio Tarocco, Pi A. B. Haase, Fabijan Pavošević, Vijay Krishna, Leonardo Guidoni, Stefan Knecht, Martina Stella
https://arxiv.org/abs/2…
Promoting Efficient Reasoning with Verifiable Stepwise Reward
Chuhuai Yue, Chengqi Dong, Yinan Gao, Hang He, Jiajun Chai, Guojun Yin, Wei Lin
https://arxiv.org/abs/2508.10293
OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique
Wasi Uddin Ahmad, Somshubra Majumdar, Aleksander Ficek, Sean Narenthiran, Mehrzad Samadi, Jocelyn Huang, Siddhartha Jain, Vahid Noroozi, Boris Ginsburg
https://arxiv.org/abs/2507.09075
Judgment as Coordination: A Joint Systems View of Visualization Design Practice
Paul C. Parsons, Arran Ridley
https://arxiv.org/abs/2507.01209
Hydrodynamic Effects in Cryogenic Buffer Gas Cells: Design Insights from Hybrid Simulations
Nick Vogeley, Bernd Bauerhenne, Daqing Wang
https://arxiv.org/abs/2508.04364
The Graph Structure of a Class of Permutation Maps over Ring $\mathbb{Z}_{p^k}$
Kai Tan, Chengqing Li
https://arxiv.org/abs/2506.20118
Exponential mixing for the stochastic Kuramoto-Sivashinsky equation on the 1D torus
Peng Gao, Hung D. Nguyen
https://arxiv.org/abs/2508.01794
Constraint Maps: Insights and Related Themes
Alessio Figalli, André Guerra, Sunghan Kim, Henrik Shahgholian
https://arxiv.org/abs/2506.23608
Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation
Junyan Ye, Dongzhi Jiang, Zihao Wang, Leqi Zhu, Zhenghao Hu, Zilong Huang, Jun He, Zhiyuan Yan, Jinghua Yu, Hongsheng Li, Conghui He, Weijia Li
https://arxiv.org/abs/2508.09987
WIP: Large Language Model-Enhanced Smart Tutor for Undergraduate Circuit Analysis
Liangliang Chen, Huiru Xie, Jacqueline Rohde, Ying Zhang
https://arxiv.org/abs/2506.08962
Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents
Xiao Yu, Baolin Peng, Ruize Xu, Michel Galley, Hao Cheng, Suman Nath, Jianfeng Gao, Zhou Yu
https://arxiv.org/abs/2506.00320
Boosting Parameter Efficiency in LLM-Based Recommendation through Sophisticated Pruning
Shanle Zheng, Keqin Bao, Jizhi Zhang, Yang Zhang, Fuli Feng, Xiangnan He
https://arxiv.org/abs/2507.07064
ChemDFM-R: An Chemical Reasoner LLM Enhanced with Atomized Chemical Knowledge
Zihan Zhao, Bo Chen, Ziping Wan, Lu Chen, Xuanze Lin, Shiyang Yu, Situo Zhang, Da Ma, Zichen Zhu, Danyang Zhang, Huayang Wang, Zhongyang Dai, Liyang Wen, Xin Chen, Kai Yu
https://arxiv.org/abs/2507.21990
This https://arxiv.org/abs/2503.09532 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csLG_…
Pride progress? As gay pro athletes consider coming out, each announcement makes a mark https://www.nytimes.com/athletic/6396179/2025/06/03/pride-month-gay-pro-athletes-evolution/
DreamAnywhere: Object-Centric Panoramic 3D Scene Generation
Edoardo Alberto Dominici, Jozef Hladky, Floor Verhoeven, Lukas Radl, Thomas Deixelberger, Stefan Ainetter, Philipp Drescher, Stefan Hauswiesner, Arno Coomans, Giacomo Nazzaro, Konstantinos Vardis, Markus Steinberger
https://arxiv.org/abs/2506.20367
You can use three.js as GIS to render contour lines from a Geotiff with customizable labels! Work in progress.
(P.S. can you tell where this might be? ;) ) #gischat
Post-AGB Binaries as Interacting Systems
Hans Van Winckel
https://arxiv.org/abs/2507.02514 https://arxiv.org/pdf/2507.02514
Rethinking Backbone Design for Lightweight 3D Object Detection in LiDAR
Adwait Chandorkar, Hasan Tercan, Tobias Meisen
https://arxiv.org/abs/2508.00744
First-principles phonon physics using the Pheasy code
Changpeng Lin, Jian Han, Ben Xu, Nicola Marzari
https://arxiv.org/abs/2508.01020
An Update on the Raiders' Training Camp Progress https://www.si.com/nfl/raiders/las-vegas-geno-smith-training-camp-jakobi-meyers-pete-carroll-brock-bowers
Repeating Flares, X-ray Outbursts and Delayed Infrared Emission: A Comprehensive Compilation of Optical Tidal Disruption Events
D. A. Langis, I. Liodakis, K. I. I. Koljonen, A. Paggi, N. Globus, L. Wyrzykowski, P. J. Mikołajczyk, K. Kotysz, P. Zieliński, N. Ihanec, J. Ding, D. Morshed, Z. Torres
https://arxiv.org/abs/25…
"In a world that demands spectacle, moss reminds us: softness can be a strategy." -- more @ #activism
AccessGuru: Leveraging LLMs to Detect and Correct Web Accessibility Violations in HTML Code
Nadeen Fathallah, Daniel Hernández, Steffen Staab
https://arxiv.org/abs/2507.19549
Multimodal Mathematical Reasoning with Diverse Solving Perspective
Wenhao Shi, Zhiqiang Hu, Yi Bin, Yang Yang, See-Kiong Ng, Heng Tao Shen
https://arxiv.org/abs/2507.02804
Time-Masked Transformers with Lightweight Test-Time Adaptation for Neural Speech Decoding
Ebrahim Feghhi, Shreyas Kaasyap, Nima Hadidi, Jonathan C. Kao
https://arxiv.org/abs/2507.02800
GEAR: Gaze-Enabled Human-Robot Collaborative Assembly
Asad Ali Shahid, Angelo Moroncelli, Drazen Brscic, Takayuki Kanda, Loris Roveda
https://arxiv.org/abs/2507.18947
Nonparametric Reaction Coordinate Optimization with Histories: A Framework for Rare Event Dynamics
Polina V. Banushkina, Sergei V. Krivov
https://arxiv.org/abs/2508.07326
RAAG: Ratio Aware Adaptive Guidance
Shangwen Zhu, Qianyu Peng, Yuting Hu, Zhantao Yang, Han Zhang, Zhao Pu, Ruili Feng, Fan Cheng
https://arxiv.org/abs/2508.03442
WebArXiv: Evaluating Multimodal Agents on Time-Invariant arXiv Tasks
Zihao Sun, Meng Fang, Ling Chen
https://arxiv.org/abs/2507.00938
Joint ASR and Speaker Role Tagging with Serialized Output Training
Anfeng Xu, Tiantian Feng, Shrikanth Narayanan
https://arxiv.org/abs/2506.10349
Dissipative Coupling in Photonic and Plasmonic Resonators
Tong Wu, Philippe Lalanne
https://arxiv.org/abs/2507.20132 https://arxiv.org/pdf/2507.20132
Prometheus: Unified Knowledge Graphs for Issue Resolution in Multilingual Codebases
Zimin Chen, Yue Pan, Siyu Lu, Jiayi Xu, Claire Le Goues, Martin Monperrus, He Ye
https://arxiv.org/abs/2507.19942
URSA: The Universal Research and Scientific Agent
Michael Grosskopf, Russell Bent, Rahul Somasundaram, Isaac Michaud, Arthur Lui, Nathan Debardeleben, Earl Lawrence
https://arxiv.org/abs/2506.22653
Adversarial Defence without Adversarial Defence: Enhancing Language Model Robustness via Instance-level Principal Component Removal
Yang Wang, Chenghao Xiao, Yizhi Li, Stuart E. Middleton, Noura Al Moubayed, Chenghua Lin
https://arxiv.org/abs/2507.21750
Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy
Yuhao Liu, Tengfei Wang, Fang Liu, Zhenwei Wang, Rynson W. H. Lau
https://arxiv.org/abs/2506.22432
Simulating Human Behavior with the Psychological-mechanism Agent: Integrating Feeling, Thought, and Action
Qing Dong, Pengyuan Liu, Dong Yu, Chen Kang
https://arxiv.org/abs/2507.19495
FlowETL: An Autonomous Example-Driven Pipeline for Data Engineering
Mattia Di Profio, Mingjun Zhong, Yaji Sripada, Marcel Jaspars
https://arxiv.org/abs/2507.23118
A Systematic Review of Human-AI Co-Creativity
Saloni Singh, Koen Hindriks, Dirk Heylen, Kim Baraka
https://arxiv.org/abs/2506.21333
Active Inference AI Systems for Scientific Discovery
Karthik Duraisamy
https://arxiv.org/abs/2506.21329
Setting The Table with Intent: Intent-aware Schema Generation and Editing for Literature Review Tables
Vishakh Padmakumar, Joseph Chee Chang, Kyle Lo, Doug Downey, Aakanksha Naik
https://arxiv.org/abs/2507.19521