
2025-06-16 10:22:19
Retrieval-Augmented Code Review Comment Generation
Hyunsun Hong, Jongmoon Baik
https://arxiv.org/abs/2506.11591
Some developers say GPT-5 excels at technical reasoning and planning coding tasks and is cost-effective, but Claude Opus and Sonnet still produce better code (Lauren Goode/Wired)
https://www.wired.com/story/gpt-5-coding-review-software-engineering/
from my link log —
cargo-crev: A web-of-trust code review system for Rust.
https://github.com/crev-dev/cargo-crev
saved 2025-09-09
A Review of Cloud Computing in Seismology
Yiyu Ni, Marine A. Denolle, Jannes Munchmeyer, Yinzhi Wang, Kuan-Fu Feng, Carlos Garcia Jurado Suarez, Amanda M. Thomas, Chad Trabant, Alex Hamilton, David Mencin
https://arxiv.org/abs/2506.11307
Quo Vadis, Code Review? Exploring the Future of Code Review
Michael Dorner, Andreas Bauer, Darja Šmite, Lukas Thode, Daniel Mendez, Ricardo Britto, Stephan Lukasczyk, Ehsan Zabardast, Michael Kormann
https://arxiv.org/abs/2508.06879
Code-Switching in End-to-End Automatic Speech Recognition: A Systematic Literature Review
Maha Tufail Agro, Atharva Kulkarni, Karima Kadaoui, Zeerak Talat, Hanan Aldarmaki
https://arxiv.org/abs/2507.07741
Vibe Coding for UX Design: Understanding UX Professionals' Perceptions of AI-Assisted Design and Development
Jie Li, Youyang Hou, Laura Lin, Ruihao Zhu, Hancheng Cao, Abdallah El Ali
https://arxiv.org/abs/2509.10652
@… @… that's something I do when reviewing humans. I usually start a review with the tests, and if the tests seem to have gaps, the code probably does too. If they seem sufficiently thorough, the code probably needs less focus on the logic a…
Probing Pre-trained Language Models on Code Changes: Insights from ReDef, a High-Confidence Just-in-Time Defect Prediction Dataset
Doha Nam, Taehyoun Kim, Duksan Ryu, Jongmoon Baik
https://arxiv.org/abs/2509.09192
From Provable Correctness to Probabilistic Generation: A Comparative Review of Program Synthesis Paradigms
Zurabi Kobaladze, Anna Arnania, Tamar Sanikidze
https://arxiv.org/abs/2508.00013
"the new review of the earlier assessment does not dispute the conclusion that Russia favored the election of Donald J. Trump." #GiftLink
C.I.A. Says Its Leaders Rushed Report on Russia Interference in 2016 Vote - The New York Times
https://www.nytimes.com/2025/07/02/us/politics/russia-trump-2016-election.html?unlocked_article_code=1.Tk8.1oh0.obRdrVnArZxp&smid=url-share
ByteDance's publishing imprint 8th Note Press began informing writers and agents in late May that it was closing and returning publication rights to authors (Alexandra Alter/New York Times)
https://www.
Watching the frustratingly fruitless fights over the USEFULNESS of LLM-based coding helpers, I've come down to 3 points that explain why ppl seem to live in different realities:
Most programmers:
1) Write inconsequential remixes of trivial code that has been written many times before.
2) Lack the taste for good design & suck at code review in general (yours truly included).
3) Lack the judgement to differentiate between 1) & FOSS repos of nontrivial code, …
GitLab 18.1 supports ORCID identifiers in user profiles: https://about.gitlab.com/releases/2025/06/19/gitlab-18-1-released/#orcid-identifier-in-user-profile
ChatGPT for Code Refactoring: Analyzing Topics, Interaction, and Effective Prompts
Eman Abdullah AlOmar, Luo Xu, Sofia Martinez, Anthony Peruma, Mohamed Wiem Mkaouer, Christian D. Newman, Ali Ouni
https://arxiv.org/abs/2509.08090
Covid Vaccine Opponent Tapped to Lead Federal Review Team (Christina Jewett/New York Times)
https://www.nytimes.com/2025/08/22/health/covid-vaccines-rfk.html?unlocked_article_code=1.gE8.q-u0.YKJ7VwZQMZmg&smid=url-share
http://www.memeorandum.com/250822/p141#a250822p141
At #TagDerDigitalenFreiheit, hosted by @… in #Tübingen on July 26–27, I'll be giving a talk on all the things you can do with :git:
Next up: principles of using AI for software dev.
For one, don't hand off code for review that you haven't reviewed yourself.
For another, exec expectations don't match current reality on what AI can('t) do.
#GophersUnite
Does AI Code Review Lead to Code Changes? A Case Study of GitHub Actions
Kexin Sun, Hongyu Kuang, Sebastian Baltes, Xin Zhou, He Zhang, Xiaoxing Ma, Guoping Rong, Dong Shao, Christoph Treude
https://arxiv.org/abs/2508.18771
Automating Thematic Review of Prevention of Future Deaths Reports: Replicating the ONS Child Suicide Study using Large Language Models
Sam Osian, Arpan Dutta, Sahil Bhandari, Iain E. Buchan, Dan W. Joyce
https://arxiv.org/abs/2507.20786
Teaching Programming in the Age of Generative AI: Insights from Literature, Pedagogical Proposals, and Student Perspectives
Clemente Rubio-Manzano, Jazna Meza, Rodolfo Fernandez-Santibanez, Christian Vidal-Castro
https://arxiv.org/abs/2507.00108
Automated Code Review Using Large Language Models at Ericsson: An Experience Report
Shweta Ramesh, Joy Bose, Hamender Singh, A K Raghavan, Sujoy Roychowdhury, Giriprasad Sridhara, Nishrith Saini, Ricardo Britto
https://arxiv.org/abs/2507.19115
“Developers on teams with high AI adoption complete 21% more tasks and merge 98% more pull requests, but PR review time increases 91%, revealing a critical bottleneck: human approval.”
Maybe it’s confirmation bias, but I can see that. You generate more code, possibly harder to comprehend, that still has to be double-checked by people who weren’t involved in the process. If that 91% is per PR, the load compounds: nearly twice the PRs, each taking nearly twice as long to review, works out to roughly 3.8× (1.98 × 1.91) the total human review time. That slows you down, unless you skip the understanding part by, you guessed it, moving fast and breaking things.
Benchmarking and Studying the LLM-based Code Review
Zhengran Zeng, Ruikai Shi, Keke Han, Yixin Li, Kaicheng Sun, Yidong Wang, Zhuohao Yu, Rui Xie, Wei Ye, Shikun Zhang
https://arxiv.org/abs/2509.01494
LLM-Driven Collaborative Model for Untangling Commits via Explicit and Implicit Dependency Reasoning
Bo Hou, Xin Tan, Kai Zheng, Fang Liu, Yinghao Zhu, Li Zhang
https://arxiv.org/abs/2507.16395
Fine-Tuning Multilingual Language Models for Code Review: An Empirical Study on Industrial C# Projects
Igli Begolli, Meltem Aksoy, Daniel Neider
https://arxiv.org/abs/2507.19271
from my link log —
Why some of us like "interdiff" code review.
https://gist.github.com/thoughtpolice/9c45287550a56b2047c6311fbadebed2
saved 2025-08-20
Automatic Identification of Machine Learning-Specific Code Smells
Peter Hamfelt, Ricardo Britto, Lincoln Rocha, Camilo Almendra
https://arxiv.org/abs/2508.02541
Metamorphic Testing of Deep Code Models: A Systematic Literature Review
Ali Asgari, Milan de Koning, Pouria Derakhshanfar, Annibale Panichella
https://arxiv.org/abs/2507.22610
Automated Code Review Using Large Language Models with Symbolic Reasoning
Busra Icoz, Goksel Biricik
https://arxiv.org/abs/2507.18476
Measuring the effectiveness of code review comments in GitHub repositories: A machine learning approach
Shadikur Rahman, Umme Ayman Koana, Hasibul Karim Shanto, Mahmuda Akter, Chitra Roy, Aras M. Ismael
https://arxiv.org/abs/2508.16053
AI-Assisted Fixes to Code Review Comments at Scale
Chandra Maddila, Negar Ghorbani, James Saindon, Parth Thakkar, Vijayaraghavan Murali, Rui Abreu, Jingyue Shen, Brian Zhou, Nachiappan Nagappan, Peter C. Rigby
https://arxiv.org/abs/2507.13499
Previously on... Automating Code Review
Robert Heumüller, Frank Ortmeier
https://arxiv.org/abs/2508.18003 https://arxiv.org/pdf/2508.18003
An Empirical Study on the Amount of Changes Required for Merge Request Acceptance
Samah Kansab, Mohammed Sayagh, Francis Bordeleau, Ali Tizghadam
https://arxiv.org/abs/2507.23640
Machine Learning Pipeline for Software Engineering: A Systematic Literature Review
Samah Kansab
https://arxiv.org/abs/2508.00045
Socio-Technical Smell Dynamics in Code Samples: A Multivocal Review on Emergence, Evolution, and Co-Occurrence
Arthur Bueno, Bruno Cafeo, Maria Cagnin, Awdren Fontão
https://arxiv.org/abs/2507.13481
Uncovering Systematic Failures of LLMs in Verifying Code Against Natural Language Specifications
Haolin Jin, Huaming Chen
https://arxiv.org/abs/2508.12358
The Impact of Large Language Models (LLMs) on Code Review Process
Antonio Collante, Samuel Abedu, SayedHassan Khatoonabadi, Ahmad Abdellatif, Ebube Alor, Emad Shihab
https://arxiv.org/abs/2508.11034
Automated Validation of LLM-based Evaluators for Software Engineering Artifacts
Ora Nova Fandina, Eitan Farchi, Shmulik Froimovich, Rami Katan, Alice Podolsky, Orna Raz, Avi Ziv
https://arxiv.org/abs/2508.02827
WIP: Leveraging LLMs for Enforcing Design Principles in Student Code: Analysis of Prompting Strategies and RAG
Dhruv Kolhatkar, Soubhagya Akkena, Edward F. Gehringer
https://arxiv.org/abs/2508.11717
ChangePrism: Visualizing the Essence of Code Changes
Lei Chen, Michele Lanza, Shinpei Hayashi
https://arxiv.org/abs/2508.12649
LLMs in Coding and their Impact on the Commercial Software Engineering Landscape
Vladislav Belozerov, Peter J Barclay, Ashkan Sami
https://arxiv.org/abs/2506.16653
Replaced article(s) found for cs.SE. https://arxiv.org/list/cs.SE/new
[1/1]:
- Knowledge-Guided Prompt Learning for Request Quality Assurance in Public Code Review
Lin Li, Xinchun Yu, Xinyu Chen, Peng Liang