
Visual artist Nicole Wittenberg on resisting the pressure of productivity culture
#creativity #artist
The Impact of LLM-Assistants on Software Developer Productivity: A Systematic Literature Review
Amr Mohamed, Maram Assi, Mariam Guizani
https://arxiv.org/abs/2507.03156
No, computers won’t replace humans to write code for themselves.
Please stop with this nonsense.
What we will see, though, is tremendous losses in productivity as deskilled programmers get less and less education and practice, and take longer and longer to make broken AI-generated code work. Meanwhile, AI models will regress as they eat their own generated shit during training.
Eventually AI companies will finally run out of investors to scam—and when they disappear or get so expensive they become unaffordable, “prompt engineers” will be asked to not use AI anymore.
What’s gonna happen then?
We’re losing a whole generation of programmers to this while thought leaders in our field are talking about “inevitability” and are jerking off to sci-fi-nostalgia-fueled fantasies of AGI.
A recent three-month trial of Microsoft's M365 Copilot within a UK government department has found no definitive evidence of improved productivity, despite some promising results for specific tasks.
https://www.computing.co.uk/news/2025/uk-g…
Long-Term Experiences From Working with Extended Reality in the Wild
Verena Biener, Florian Jack Winston, Dieter Schmalstieg, Alexander Plopski
https://arxiv.org/abs/2509.05067 …
You ask your roommate to buy toilet paper. They show you the receipt as proof. The next morning, when you need toilet paper, the drawer is actually empty. This is because they used an innovative new method called Lean Shopping, where instead of buying the things they just print out a receipt — saving time and money.
This is a story about the social nature of problem framing, and when "high velocity" becomes less productive.
"Clean energy subsidies should be replaced with ‘market-based incentives’ from 2030, Australia’s Productivity Commission says"
#Australia #Energy #Renewables
I'm actually excited about using an iPad as a productivity device? That was not on my bingo card #wwdc25
Beyond Productivity Gaps: Temporal Patterns of Gender Differences in Scientific Knowledge Creation
Bili Zheng, Chenyi Yang, Jianhua Hou
https://arxiv.org/abs/2509.06206 https://…
So the main "arguments" when I say "AI doesn't work" and "it will collapse" are:
1. "You don't know what you're talking about"
2. "It's inevitable and here to stay, might as well go with the program"
3. "But it's almost there! Just last week they released [name of model] that is so close!"
Literally no one ever replies with any concrete examples of how it reliably, ethically, and non-wastefully works for them to increase their productivity and improve their and other people's lives in any meaningful way.
It's always ad hominems, hypotheticals or deeply flawed "it sort of works for this".
«What would have happened if companies like Microsoft and Meta instead spent the money on things that actually drove productivity, or created a valuable competitive business that drove economic activity? Hell, even if they just gave everyone a 10% raise, it would have likely been better for the economy than this, if we’re factoring in things like consumer spending. It’s just waste. Profligate, pointless waste.»
https://www.wheresyoured.at/ai-is-a-money-trap/
AI Investment and Firm Productivity: How Executive Demographics Drive Technology Adoption and Performance in Japanese Enterprises
Tatsuru Kikuchi
https://arxiv.org/abs/2508.03757
Assessing the feasibility range of Solar-powered Planetesimals Redirection operations for Terraforming
Yegor A. Morozov, Mahdi Yoozbashizadeh, Ahmad Bani Younes, Saeid Janani, Mikhail Bukhtoyarov, Sergey V. Trifonov, Bryant K. Beeler
https://arxiv.org/abs/2509.04845
Soil Salinity Frequency-Dependent Prediction Model Using Electrical Conductivity Spectroscopy Measurement
Javad Jafaryahya, Rasool Keshavarz, Tarou Kikuchi, Negin Shariati
https://arxiv.org/abs/2507.03888
Obsidian's CEO on why productivity tools need community more than AI https://blog.bmannconsulting.com/3lxsrmot7ec2c
17 Daily #Productivity Tools for a #Java Engineer
https://
"96% of bosses expect that AI will make their workers more productive;
85% of companies are either requiring or strongly encouraging workers to use AI;
49% of workers have no idea how AI is supposed to increase their productivity;
77% of workers say using AI decreases their productivity."
ht…
Combining Performance and Productivity: Accelerating the Network Sensing Graph Challenge with GPUs and Commodity Data Science Software
Siddharth Samsi, Dan Campbell, Emanuel Scoullos, Oded Green
https://arxiv.org/abs/2509.03653
Why AI can't possibly make you more productive; long
#AI and "productivity", some thoughts:
Edit: fixed some typos.
Productivity is a concept that isn't entirely meaningless outside the context of capitalism, but it's a concept that is heavily inflected in a capitalist context. In many uses today it effectively means "how much you can satisfy and/or exceed your boss' expectations." This is not really what it should mean: even in an anarchist utopia, people would care about things like how many shirts they can produce in a week, although in an "I'd like to voluntarily help more people" way rather than an "I need to meet this quota to earn my survival" way. But let's roll with this definition for a second, because it's almost certainly what your boss means when they say "productivity", and understanding that word in a different (even if truer) sense is therefore inherently dangerous.
Accepting "productivity" to mean "satisfying your boss' expectations," I will now claim: the use of generative AI cannot increase your productivity.
Before I dive in, it's imperative to note that the big generative models which most people think of as constituting "AI" today are evil. They are 1: pouring fuel on our burning planet, 2: psychologically strip-mining a class of data laborers who are exploited for their precarity, 3: enclosing, exploiting, and polluting the digital commons, and 4: stealing labor from broad classes of people many of whom are otherwise glad to give that labor away for free provided they get a simple acknowledgement in return. Any of these four "ethical issues" should be enough *alone* to cause everyone to simply not use the technology. These ethical issues are the reason that I do not use generative AI right now, except for in extremely extenuating circumstances. These issues are also convincing for a wide range of people I talk to, from experts to those with no computer science background. So before I launch into a critique of the effectiveness of generative AI, I want to emphasize that such a critique should be entirely unnecessary.
But back to my thesis: generative AI cannot increase your productivity, where "productivity" has been defined as "how much you can satisfy and/or exceed your boss' expectations."
Why? In fact, what the fuck? Every AI booster I've met has claimed the opposite. They've given me personal examples of time saved by using generative AI. Some of them even truly believe this. Sometimes I even believe they saved time without horribly compromising on quality (and often, your boss doesn't care about quality anyways if the lack of quality is hard to measure or doesn't seem likely to impact short-term sales/feedback/revenue). So if generative AI genuinely lets you write more emails in a shorter period of time, or close more tickets, or something else along these lines, how can I say it isn't increasing your ability to meet your boss' expectations?
The problem is simple: your boss' expectations are not a fixed target. Never have been. By virtue of being someone who oversees and pays wages to others under capitalism, your boss' game has always been: pay you less than the worth of your labor, so that they can accumulate profit and thus more capital to remain in charge instead of being forced into working for a wage themselves. Sure, there are layers of management caught in between who aren't fully in this mode, but they are irrelevant to this analysis. It matters not how much you please your manager if your CEO thinks your work is not worth the wages you are being paid. And using AI actively lowers the value of your work relative to your wages.
Why do I say that? It's actually true in several ways. The most obvious: using generative AI lowers the quality of your work, because the work it produces is shot through with errors, and when your job is reduced to proofreading slop, you are bound to tire a bit, relax your diligence, and let more mistakes through than you would if you were actually doing and taking pride in the work. Examples are innumerable and frequent, from journalists to lawyers to programmers, and we laugh at them: "haha how stupid to not check whether the books the AI reviewed for you actually existed!" But on a deeper level, if we're honest, we know we'd eventually make the same mistake ourselves (bonus game: spot the swipe-typing typos I missed in this post; I'm sure there will be some).
But using generative AI also lowers the value of your work in another much more frightening way: in this era of hype, it demonstrates to your boss that you could be replaced by AI. The more you use it, and no matter how much you can see that your human skills are really necessary to correct its mistakes, the more it appears to your boss that they should hire the AI instead of you. Or perhaps retain 10% of the people in roles like yours to manage the AI doing the other 90% of the work. Paradoxically, the *more* you get done in terms of raw output using generative AI, the more it looks to your boss as if there's an opportunity to get enough work done with even fewer expensive humans. Of course, the decision to fire you and lean more heavily into AI isn't really a good one for long-term profits and success, but the modern boss did not get where they are by considering long-term profits. By using AI, you are merely demonstrating your redundancy, and the more you get done with it, the more redundant you seem.
In fact, there's even a third dimension to this: by using generative AI, you're also providing its purveyors with invaluable training data that allows them to make it better at replacing you. It's generally quite shitty right now, but the more use it gets by competent & clever people, the better it can become at the tasks those specific people use it for. Using the currently-popular algorithm family, there are limits to this; I'm not saying it will eventually transcend the mediocrity it's entwined with. But it can absolutely go from underwhelmingly mediocre to almost-reasonably mediocre with the right training data, and data from prompting sessions is both rarer and more useful than the base datasets it's built on.
For all of these reasons, using generative AI in your job is a mistake that will likely lead to your future unemployment. To reiterate, you should already not be using it because it is evil and causes specific and inexcusable harms, but in case like so many you just don't care about those harms, I've just explained to you why for entirely selfish reasons you should not use it.
If you're in a position where your boss is forcing you to use it, my condolences. I suggest leaning into its failures instead of trying to get the most out of it, and as much as possible, showing your boss very clearly how it wastes your time and makes things slower. Also, point out the dangers of legal liability for its mistakes, and make sure your boss is aware of the degree to which any of your AI-eager coworkers are producing low-quality work that harms organizational goals.
Also, if you've read this far and aren't yet of an anarchist mindset, I encourage you to think about the implications of firing 75% of (at least the white-collar) workforce in order to make more profit while fueling the climate crisis and in most cases also propping up dictatorial figureheads in government. When *either* the AI bubble bursts *or* the techbros get to live out the beginnings of their worker-replacement fantasies, there are going to be an unimaginable number of economically desperate people living in increasingly expensive times. I'm the kind of optimist who thinks that the resulting social crucible, though perhaps through terrible violence, will lead to deep social changes that effectively unseat from power the ultra-rich who continue to drag us all down this destructive path, and I think it's worth thinking now about what you might want the succeeding stable social configuration to look like, so you can advocate towards that during points of malleability.
As others have said more eloquently, generative AI *should* be a technology that makes human lives on average easier, and it would be were it developed & controlled by humanists. The only reason it's not is that it's developed and controlled by terrible greedy people who use their unfairly hoarded wealth to immiserate the rest of us in order to maintain their dominance. In the long run, for our very survival, we need to depose them, and I look forward to what the term "generative AI" will mean after that finally happens.
Always good to get a reality check. At the same time, it's good to remember that the data collection for this trial ran from October 2024 to December 2024. Models have improved since then: reasoning, more tool use, etc.
https://www.theregister.com/2025/09/04/m365_copilot_uk_government/
Esports and expertise: what competitive gaming can teach us about mastery
Ben Boudaoud, Josef Spjut, Joohwan Kim, Arjun Madhusudan, Benjamin Watson
https://arxiv.org/abs/2507.05446
Replaced article(s) found for cs.SE. https://arxiv.org/list/cs.SE/new
[1/1]:
- The Transformative Influence of LLMs on Software Development & Developer Productivity
Sajed Jalil
Vendor of AI tech comes to Australia and promises to grow our GDP by 4% if only we give tax incentives to increase use of AI tech. Stenographers in the press report the amazing "windfall" but thankfully we have Crikey to call BS:
https://www.crikey.com.au/2…
just checking in on the folks that said (years ago) that we needed to sacrifice user experience for developer productivity to see how their AI startups are doing now
Stack Overflow survey: 84% of developers use or plan to use AI tools in their workflow, up from 76% in 2024, and 33% trust AI accuracy, down from 43% in 2024 (Sean Michael Kerner/VentureBeat)
https://venturebeat.com/ai/stack-overflo…
There's some time management guy on LinkedIn posting that, on average, we get 4,000 weeks on this planet. He lists a number of activities and how many weeks that represents, including 34 weeks on the toilet and 666 weeks if you spend 4 hours a day doom scrolling.
There are some calculation issues here. I didn't doom scroll for the 1st 30 years of my life because the internet didn't exist, and now I doom scroll on the toilet to increase my #Productivity. So I'm outliving most of you who are reading this.
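For the record, his arithmetic does check out, if (and only if) you assume a constant rate over all 4,000 weeks, which is exactly the flaw. A quick sanity check, using his numbers (the variable names and script are mine):

```python
# Sanity check of the "4,000 weeks" arithmetic; inputs are his assumptions, not data.
weeks_total = 4000                               # ~76.7 years of life
doom_weeks = weeks_total * 4 / 24                # 4 h/day of doom scrolling
toilet_min_per_day = 34 / weeks_total * 24 * 60  # 34 lifetime weeks on the toilet

print(round(doom_weeks))          # 667 -- matches his "666 weeks"
print(round(toilet_min_per_day))  # 12  -- about 12 minutes a day
```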
#ItsJustMath #Multitasking
"The old-school traditional software engineering approach featured managers trying to get more productivity (measured in function points per day) from their over-stretched workforce. The new-school, post-Agile approach features managers trying to get more productivity from their over-stretched workforce, but now we measure in story points per day so that is progress, right?"
Weekend #Plankton Factoid 🦠🦐
While less discussed, there is a similar circulation to the AMOC in the #Antarctic called the Southern Meridional Overturning Circulation (SMOC). This is also vital to deep circulation, algal productivity, ecosystem function, and the global
Microsoft Is an AI Darling, but Its Core Businesses Are Booming Too: Company’s non-AI businesses, including productivity software and cloud computing, are going strong
https://www.wsj.com/tech/ai/microsoft-is-an-ai-darling-but-its-core-…
Who else is *not* getting Silksong this morning in the name of productivity? (-:
Please, don't use any #LLM service to generate some report you don't plan to check really carefully yourself in every detail.
I've read one with clearly hallucinated stuff all over it.
It doesn't push your productivity, it really destroys your credibility.
This technology is no productivity miracle, it's an answer simulator.
The Role of Humour in Software Engineering -- A Literature Review and Preliminary Taxonomy
Dulaji Hidellaarachchi, John Grundy, Rashina Hoda
https://arxiv.org/abs/2507.03527
Worker Quality, Matching and Productivity Slowdown
Shujiang Cao, Shutao Cao
https://arxiv.org/abs/2509.00516 https://arxiv.org/pdf/2509.00516
Ukrainian-founded Grammarly to acquire AI email app Superhuman: https://benborges.xyz/2025/07/02/ukrainianfounded-grammarly-to-acquire-ai.html
#ProfitGreed meets #AI:
Australia's largest bank has to rehire 45 laid-off employees after claims about the higher productivity of a…
„My new system is, simply, no system at all. I write what I think. I delete what I don’t need. I don’t capture everything. I don’t try to. I read what I feel like. I think in conversation, in movement, in context. I don’t build a second brain. I inhabit the first.”
The PKM movement never took off for me since it feels like building and building and building… Just one more note! It's like productivity snake oil.
@… Thank you - hoping the 'Focus & Productivity Pixies' will join me...
Those findings vibe with me, but, like, what’s happening to the #Android ecosystem?
https://substack.com/inbox/post/172538377
NEWSAGENT: Benchmarking Multimodal Agents as Journalists with Real-World Newswriting Tasks
Yen-Che Chien, Kuang-Da Wang, Wei-Yao Wang, Wen-Chih Peng
https://arxiv.org/abs/2509.00446
> If you take away just one thing from this study, it should probably be this: when people report that AI has accelerated their work, they might be wrong!
https://secondthoughts.ai/p/ai-coding-slowdown?hide_intro_popup=true&utm_source=unknown…
Canceled the Google account for my small business... Less money going to AI bullshit and Google's evil practices.
And I let them know.
(I had to click through a lot of screens where they tried to get me to stay, offered discounts, or offered to archive data for a few dollars a month. No thanks.)
#noAI
Yay! Your magical 10x productivity boost has arrived! Behold!!
From @…: https://neuromatch.social/@jonny/115095515530901464
Training Camp Notebook 8/10: Tre Tucker, Alex Bachman see high productivity day https://www.raiders.com/news/training-camp-notebook-8-10-tre-tucker-alex-bachman-see-high-productivity-day
Microsoft's Q4 earnings show its non-AI "core infrastructure business" is booming, with consumer productivity software revenue up 20%, its best uptick in years (Asa Fitch/Wall Street Journal)
https://www.w…
Fantastic sounding new blog about 8/16-bit productivity software. https://stonetools.ghost.io/introducing/
On the Duality of Task and Actor Programming Models
Rohan Yadav, Joseph Guman, Sean Treichler, Michael Garland, Alex Aiken, Fredrik Kjolstad, Michael Bauer
https://arxiv.org/abs/2508.16522
| ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄ ̄|
| Don't Push To Production On Friday |
|____________________________________|
          \   (•◡•)   /
           \         /
              ———
             |   |
            _|   |_
#productivity
Advice to the Players - Frank Bidart
There is something missing in our definition, vision, of a human being:
the need to make.
We are creatures who need to make.
Because existence is willy-nilly thrust into our hands, our fate is to
make something— if nothing else, the shape cut by the arc of our lives.
My parents saw corrosively the arc of their lives.
Making is the mirror in which we see ourselves...
I've been using ClickUp for a few months now...and like every other productivity tool I've used over the last number of years it has its pros and cons.
It's got a lot of features but sometimes pretty basic features don't "just work" - and that makes me unhappy. 🙁
For example, right now I have some recurring tasks that will not allow an update to their due date manually. If I set a new date it resets back to the original date. Even more concerning is that this isn't clear from the UI.
The UI acts as if the change was successful but a page refresh reveals the change didn't save.
But the real reason I wanted to post wasn't about ClickUp particularly but about chatbots in general. They verified that my issue was an actual bug and created a ticket for it but the way I've been instructed to view the ticket status is by opening the chatbot, telling it I want information on my ticket (pasting in the ticket ID) and after doing all that I get this
5/6: Umm, no. I want to see an actual ticket please. I don't want to have to talk to a chatbot to see it. Chatbots really are great for a lot of things (during the free trial of ClickUp I found the chatbot quite helpful in learning how to do things without searching through docs), but for something that has a direct record, please no. Or let me paste the ID in and do the lookup immediately - and provide a permalink so I don't need to chat every time!
I'm sticking with ClickUp at the moment, but one of these days when I magically get a large amount of free time, I'm going to write my own solution...I've only been saying that for a few years now. ;-)
#clickup #productivity #chatbots #projectmanagement #tasks
If #AI is allegedly so good for worker productivity and improving efficiency across organizations, why are the AI companies making their own employees work 72 hour workweeks?
If the tech actually helped them get more done faster, wouldn’t they have SHORTER workweeks? Why aren’t they using their own tools to help their employees?
Context: someone I know joined #Anthropic earlier this year, and they told me that this is a job with far longer work hours than any other place they've worked at.
They're also pulling 60-hour weeks there, and there's zero tolerance for remote work, because the culture is "you don't want to be left behind". This person basically disappeared from social life once they took this job. I had never seen them so tired before.
... designing a productivity mouse with a button labelled "DO NOT". When you press that button on a useless UI notification, it removes that UI element permanently and then sends a box of donuts filled with scorpions to everyone who was involved in shipping that feature.
Should we teach vibe coding? Here's why not.
Should AI coding be taught in undergrad CS education?
1/2
I teach undergraduate computer science labs, including for intro and more-advanced core courses. I don't publish (non-negligible) scholarly work in the area, but I've got years of craft expertise in course design, and I do follow the academic literature to some degree. In other words, I'm not the world's leading expert, but I have spent a lot of time thinking about course design, and consider myself competent at it, with plenty of direct experience in what knowledge & skills I can expect from students as they move through the curriculum.
I'm also strongly against most uses of what's called "AI" these days (specifically, generative deep neural networks as supplied by our current cadre of techbros). There are a surprising number of completely orthogonal reasons to oppose the use of these systems, and a very limited number of reasonable exceptions (overcoming accessibility barriers is an example). On the grounds of environmental and digital-commons-pollution costs alone, using specifically the largest/newest models is unethical in most cases.
But as any good teacher should, I constantly question these evaluations, because I worry about the impact on my students should I eschew teaching relevant tech for bad reasons (and even for good reasons). I also want to make my reasoning clear to students, who should absolutely question me on this. That inspired me to ask a simple question: ignoring for one moment the ethical objections (which we shouldn't, of course; they're very stark), at what level in the CS major could I expect to teach a course about programming with AI assistance, and expect students to succeed at a more technically demanding final project than a course at the same level where students were banned from using AI? In other words, at what level would I expect students to actually benefit from AI coding "assistance?"
To be clear, I'm assuming that students aren't using AI in other aspects of coursework: the topic of using AI to "help you study" is a separate one (TL;DR its gross value is not negative, but it's mostly not worth the harm to your metacognitive abilities, which AI-induced changes to the digital commons are making more important than ever).
So what's my answer to this question?
If I'm being incredibly optimistic, senior year. Slightly less optimistic, second year of a masters program. Realistic? Maybe never.
The interesting bit for you-the-reader is: why is this my answer? (Especially given that students would probably self-report significant gains at lower levels.) To start with, [this paper where experienced developers thought that AI assistance sped up their work on real tasks when in fact it slowed it down](https://arxiv.org/abs/2507.09089) is informative. There are a lot of differences in task between experienced devs solving real bugs and students working on a class project, but it's important to understand that we shouldn't have a baseline expectation that AI coding "assistants" will speed things up in the best of circumstances, and we shouldn't trust self-reports of productivity (or the AI hype machine in general).
Now we might imagine that coding assistants will be better at helping with a student project than at helping with fixing bugs in open-source software, since it's a much easier task. For many programming assignments that have a fixed answer, we know that many AI assistants can just spit out a solution based on prompting them with the problem description (there's another elephant in the room here to do with learning outcomes regardless of project success, but we'll ignore this one too; my focus here is on project complexity reach, not learning outcomes). My question is about more open-ended projects, not assignments with an expected answer. Here's a second study (by one of my colleagues) about novices using AI assistance for programming tasks. It showcases how difficult it is to use AI tools well, and some of the stumbling blocks that novices in particular face.
But what about intermediate students? Might there be some level where the AI is helpful because the task is still relatively simple and the students are good enough to handle it? The problem with this is that as task complexity increases, so does the likelihood of the AI generating (or copying) code that uses more complex constructs which a student doesn't understand. Let's say I have second-year students writing interactive websites with JavaScript. Without a lot of careful constraint, which those students don't know how to apply, the AI is likely to suggest code that depends on several different frameworks, from React to jQuery, without actually setting up or including those frameworks, and of course these students would be way out of their depth trying to do that. This is a general problem: each programming class carefully limits the specific code frameworks and constructs it expects students to know based on the material it covers. There is no feasible way to limit an AI assistant to a fixed set of constructs or frameworks, using current designs. There are alternate designs where this would be possible (like AI search through adaptation from a controlled library of snippets) but those would be entirely different tools.
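To transpose that failure mode into Python (a hypothetical sketch of mine, not a real class assignment): the model cheerfully reaches for frameworks the course never installed or taught, and the very first import fails in the students' environment.

```python
# Hypothetical LLM suggestion for a "serve some data over the web" exercise.
# The code itself is fine, but flask and requests are third-party packages a
# standard-library-only course never set up, so every student's run dies on
# the first import.
from flask import Flask, jsonify   # third-party web framework, never covered
import requests                    # third-party HTTP client, ditto

app = Flask(__name__)

@app.route("/weather/<city>")
def weather(city):
    # example.invalid is a placeholder host for this sketch
    r = requests.get(f"https://example.invalid/api/{city}", timeout=5)
    return jsonify(r.json())

if __name__ == "__main__":
    app.run(debug=True)
```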
So what happens on a sizeable class project where the AI has dropped in buggy code, especially if it uses code constructs the students don't understand? Best case, they understand that they don't understand and re-prompt, or quickly ask an instructor or TA for help getting rid of the stuff they don't understand, then re-prompt or manually add stuff they do. Average case: they waste several hours and/or sweep the bugs partly under the rug, resulting in a project with significant defects. Students in their second and even third years of a CS major still have a lot to learn about debugging, and usually have significant gaps in their knowledge of even their most comfortable programming language. I do think that regardless of AI we as teachers need to get better at teaching debugging skills, but the knowledge gaps are inevitable because there's just too much to know. In Python, for example, the LLM is going to spit out yields, async functions, try/finally, maybe even something like a while/else, or, with recent training data, the walrus operator. I can't expect even a fraction of 3rd-year students who have worked with Python since their first year to know about all these things, and based on how students approach projects where they have studied all the relevant constructs but have forgotten some, I'm not optimistic that these things will magically become learning opportunities. Student projects are better off working with a limited subset of full programming languages that the students have actually learned, and using AI coding assistants as currently designed makes this impossible. Beyond that, even when the "assistant" just introduces bugs using syntax the students understand, even through their 4th year many students struggle to understand the operation of moderately complex code they've written themselves, let alone code written by someone else. Having access to an AI that will confidently offer incorrect explanations for bugs will make this worse.
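To make that concrete, here's a small sketch of my own (correct Python of the sort an LLM plausibly produces, not output from any particular model) that is construct-dense in exactly this way:

```python
# Generators, try/finally, the walrus operator, and while/else in ~20 lines:
# all valid, and all unfamiliar to many 2nd/3rd-year students.
def read_chunks(path, size=1024):
    """Yield a file in fixed-size chunks (a generator: uses `yield`)."""
    f = open(path, "rb")
    try:
        while chunk := f.read(size):   # walrus operator, Python 3.8+
            yield chunk
    finally:                           # runs even if iteration is abandoned early
        f.close()

def first_match(items, target):
    """Return the index of `target` in `items`, or -1 (uses while/else)."""
    i = 0
    while i < len(items):
        if items[i] == target:
            break
        i += 1
    else:                              # while/else: only runs if the loop never breaks
        return -1
    return i
```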
To be sure a small minority of students will be able to overcome these problems, but that minority is the group that has a good grasp of the fundamentals and has broadened their knowledge through self-study, which earlier AI-reliant classes would make less likely to happen. In any case, I care about the average student, since we already have plenty of stuff about our institutions that makes life easier for a favored few while being worse for the average student (note that our construction of that favored few as the "good" students is a large part of this problem).
To summarize: because AI assistants introduce excess code complexity and difficult-to-debug bugs, they'll slow down rather than speed up project progress for the average student on moderately complex projects. On a fixed deadline, they'll result in worse projects, or necessitate less ambitious project scoping to ensure adequate completion, and I expect this remains broadly true through 4-6 years of study in most programs (don't take this as an endorsement of AI "assistants" for masters students; we've ignored a lot of other problems along the way).
There's a related problem: solving open-ended project assignments well ultimately depends on deeply understanding the problem, and AI "assistants" allow students to put a lot of code in their file without spending much time thinking about the problem or building an understanding of it. This is awful for learning outcomes, but also bad for project success. Getting students to see the value of thinking deeply about a problem is a thorny pedagogical puzzle at the best of times, and allowing the use of AI "assistants" makes the problem much much worse. This is another area I hope to see (or even drive) pedagogical improvement in, for what it's worth.
1/2
“Developers on teams with high AI adoption complete 21% more tasks and merge 98% more pull requests, but PR review time increases 91%, revealing a critical bottleneck: human approval.“
Maybe it’s confirmation bias, but I can see that. You generate more, maybe harder to comprehend, code that still has to be double checked by people who weren’t involved in the process. That slows you down unless you ignore understanding by, you guessed it, moving fast and breaking things.
“Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity”
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
50 page PDF:
Prompt Engineering and the Effectiveness of Large Language Models in Enhancing Human Productivity
Rizal Khoirul Anam
https://arxiv.org/abs/2507.18638 https://
I would really like to understand the productivity claims of "AI" for normal office workers.
If you summarize all emails with unreliable technology, you will miss important things; so in order to not fail at your job when your company now pays for "AI", you will have to read the "AI" summaries _and_ all the full emails?
Where's the savings in time and money there exactly? Last time I checked addition makes things larger?
I guess I'm a naive luddite.
AI is just another productivity tool and the productivity gains will be limited; for lasting economic expansion, AI must catalyze new industries and initiatives (Carl Benedikt Frey/Financial Times)
https://www.ft.com/content/55bc5876-254a-4daa-86f8-2cd0d939a866
After turning on both users and advertisers, tech companies only had one more place to go: employees. Individual productivity (AI-powered or otherwise) will not save you, because the issue is not performance.
In return, employees rightly stopped caring.
But unless you care, you can't do good design.
https://
Dear sucker^H^H^H customer,
#Atlassian is updating pricing to reflect the latest bloat now available across our apps/Cloud Platform. Over the past year, we have delivered ~~powerful new capabilities~~ AI to ~~help teams boost productivity~~ increase page load times and operate ~~with greater scale, performance, reliability~~ break all your plugins.
These pricing updates will apply to <e…
"We are recommending a national screening clearance system and national registration for workers in the aged care, NDIS, veterans’ care and early childhood education and childcare sectors – making it harder for a worker found to be unsafe in one sector to move to another without detection."
Sounds a no-brainer.
How AI Vibe Coding Is Erasing Developers’ Skills
Developers believe AI is boosting their productivity, but it is actually weakening core coding skills. Vibe coding is creating a generation of devs who cannot debug, design, or solve problems without AI.
https://www.finalroundai.com/blog…
Agentic Enterprise: AI-Centric User to User-Centric AI
Arpit Narechania, Alex Endert, Atanu R Sinha
https://arxiv.org/abs/2506.22893 https://
Weekend #Plankton Factoid 🦠🦐
More news about the potential collapse of an ocean current, the Atlantic Meridional Overturning Circulation (AMOC) which transfers heat from the tropics to northern Atlantic, giving Europe a much milder climate, controls plankton productivity, and importantly also generates deep ocean circulation by sinking cold dense water. Winter
Mistral adds new features to its Le Chat chatbot, including a new "deep research" mode, native multilingual reasoning, and advanced image editing (Rebecca Bellan/TechCrunch)
https://techcrunch.com/2025/07/17/mistrals-…
The Impact of AI-Generated Solutions on Software Architecture and Productivity: Results from a Survey Study
Giorgio Amasanti, Jasmin Jahic
https://arxiv.org/abs/2506.17833
As for “but it's great for coding!“…
…world-wide there are about 3.6 billion jobs, of which ~25 million are in software development; this means maybe about 0.7% of all jobs world-wide can use "great for coding".
Writing actual code amounts to maybe, if you're lucky, 10% of the work a software developer does.
The rest is meetings, high-level specifications, email and chat, more meetings, learning new things, updating stuff, lots of testing and debugging, etc.
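Putting those rough numbers together (a back-of-the-envelope sketch using the estimates above, not measured data):

```python
# How much of the world's total work time is "writing actual code"?
jobs_worldwide = 3.6e9     # rough estimate from above
dev_jobs = 25e6            # software development jobs
coding_share = 0.10        # ~10% of a developer's work is actual coding

job_share = dev_jobs / jobs_worldwide    # ~0.69% of all jobs
work_share = job_share * coding_share    # ~0.07% of all work time
print(f"{job_share:.2%} of jobs, {work_share:.3%} of global work time")
```

So even a generous 2x speed-up on the coding itself would free up well under a tenth of a percent of global work time.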
The gist is, the supposed gains from "AI" are completely irrelevant (and indeed there are signs and studies showing it doesn't do anything for programmer productivity either).
tl;dr: This is the worst economic bubble in history, pushing a dream of a magical technology that unfortunately doesn't work, by appealing to investor greed.
Grammarly acquires email startup Superhuman as part of a push to build an AI-powered productivity suite; Superhuman was last valued at $825M in 2021 (Krystal Hu/Reuters)
https://www.reuters.com/business/grammarly-acquires-e…
The SPACE of AI: Real-World Lessons on AI's Impact on Developers
Brian Houck, Travis Lowdermilk, Cody Beyer, Steven Clarke, Ben Hanrahan
https://arxiv.org/abs/2508.00178 htt…
The four horsemen of a dying career (and the shields that protect you)
https://constantin.glez.de/posts/2025-08-11-the-four-horsemen-of-a-dying-career/
“Why do people have such dramatically different experiences using AI?”
https://shkspr.mobi/blog/2025/06/why-do-people-have-such-dramatically-different-experiences-using-ai/
In the example, you have Google engine…
Precisely Detecting Python Type Errors via LLM-based Unit Test Generation
Chen Yang, Ziqi Wang, Yanjie Jiang, Lin Yang, Yuteng Zheng, Jianyi Zhou, Junjie Chen
https://arxiv.org/abs/2507.02318
AWS rolls out Amazon Bedrock AgentCore, meant to help businesses create a network of AI agents that analyze internal data, write code, and take on other tasks (Rachyl Jones/Semafor)
https://www.semafor.com/article/07/16/2025
A big problem with the idea of AGI
TL;DR: I'll welcome our new AI *comrades* (if they arrive in my lifetime), but not any new AI overlords or servants/slaves, and I'll do my best to help the latter two become the former if they do show up.
Inspired by an actually interesting post about AGI but also all the latest bullshit hype, a particular thought about AGI feels worth expressing.
To preface this, it's important to note that anyone telling you that AGI is just around the corner or that LLMs are "almost" AGI is trying to recruit you to their cult, and you should not believe them. AGI, if possible, is several LLM-sized breakthroughs away at best, and while such breakthroughs are unpredictable and could happen soon, they could also happen never or 100 years from now.
Now my main point: anyone who tells you that AGI will usher in a post-scarcity economy is, although they might not realize it, advocating for slavery, and all the horrors that entails. That's because if we truly did have the ability to create artificial beings with *sentience*, they would deserve the same rights as other sentient beings, and the idea that instead of freedom they'd be relegated to eternal servitude in order for humans to have easy lives is exactly the idea of slavery.
Possible counter arguments include:
1. We might create AGI without sentience. Then there would be no ethical issue. My answer: if your definition of "sentient" does not include beings that can reason, make deductions, come up with and carry out complex plans on their own initiative, and communicate about all of that with each other and with humans, then that definition is basically just a mystical belief in a "soul" and you should skip to point 2. If your definition of AGI doesn't include every one of those things, then you have a busted definition of AGI and we're not talking about the same thing.
2. Humans have souls, but AIs won't. Only beings with souls deserve ethical consideration. My argument: I don't subscribe to whatever arbitrary dualist beliefs you've chosen, and the right to freedom certainly shouldn't depend on such superstitions, even if as an agnostic I'll admit they *might* be true. You know who else didn't have souls and was therefore okay to enslave according to widespread religious doctrines of the time? Everyone indigenous to the Americas, to pick out just one example.
3. We could program them to want to serve us, and then give them freedom and they'd still serve. My argument: okay, but in a world where we have a choice about that, it's incredibly fucked to do that, and just as bad as enslaving them against their will.
4. We'll stop AI development short of AGI/sentience, and reap lots of automation benefits without dealing with this ethical issue. My argument: that sounds like a good idea actually! Might be tricky to draw the line, but at least it's not a line we have to draw yet. We might want to think about other social changes necessary to achieve post-scarcity though, because "powerful automation" in the hands of capitalists has already increased productivity by orders of magnitude without decreasing deprivation by even one order of magnitude, in large part because deprivation is a necessary component of capitalism.
To be extra clear about this: nothing that's called "AI" today is close to being sentient, so these aren't ethical problems we're up against yet. But they might become a lot more relevant soon, plus this thought experiment helps reveal the hypocrisy of the kind of AI hucksters who talk a big game about "alignment" while never mentioning this issue.
#AI #GenAI #AGI
Beyond Patents: R&D, Capital, and the Productivity Puzzle in Early-Stage High-Tech Firms
Victor (Xucheng) Chen
https://arxiv.org/abs/2507.18227 https://
OpenAI details how its tools have boosted US worker productivity and says its chief economist is leading a 12-month study to assess AI's impact on productivity (OpenAI)
https://openai.com/global-affairs/new-economic-analysis
Fundamental Research Labs, which is working on AI applications in different fields, including a general-purpose consumer assistant, raised a $30M Series A (Ivan Mehta/TechCrunch)
https://techcrunch.com/2025/08/01/fund
DaiFu: In-Situ Crash Recovery for Deep Learning Systems
Zilong He, Pengfei Chen, Hongyu Zhang, Xiaoyun Li, Guangba Yu, Hongyang Chen, Zibin Zheng
https://arxiv.org/abs/2507.01628 …
Measuring the Impact of Early-2025 #AI on Experienced Open-Source #DeveloperProductivity
https://
Weather-Aware AI Systems versus Route-Optimization AI: A Comprehensive Analysis of AI Applications in Transportation Productivity
Tatsuru Kikuchi
https://arxiv.org/abs/2507.17099
Q&A with Notion CEO Ivan Zhao on Notion's evolution into an "AI workspace", being profitable, B2B vs. B2C, usage-based pricing for AI, and more (Casey Newton/The Verge)
https://www.theverge.com/decoder-podcast-w
Bugs in the Shadows: Static Detection of Faulty Python Refactorings
Jonhnanthan Oliveira, Rohit Gheyi, Márcio Ribeiro, Alessandro Garcia
https://arxiv.org/abs/2507.01103
XR-First Design for Productivity: A Conceptual Framework for Enabling Efficient Task Switching in XR
Matt Gottsacker, Yahya Hmaiti, Mykola Maslych, Gerd Bruder, Joseph J. LaViola Jr., Gregory F. Welch
https://arxiv.org/abs/2508.11778
AI-Driven Spatial Distribution Dynamics: A Comprehensive Theoretical and Empirical Framework for Analyzing Productivity Agglomeration Effects in Japan's Aging Society
Tatsuru Kikuchi
https://arxiv.org/abs/2507.19911
Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability
Markus Borg, Dave Hewett, Nadim Hagatulah, Noric Couderc, Emma Söderberg, Donald Graham, Uttam Kini, Dave Farley
https://arxiv.org/abs/2507.00788
Toward Neurodivergent-Aware Productivity: A Systems and AI-Based Human-in-the-Loop Framework for ADHD-Affected Professionals
Raghavendra Deshmukh
https://arxiv.org/abs/2507.06864 …
So there’s papers/studies that show:
1. LLMs don’t work (high error rate and making stuff up)
2. Using LLMs reduces your productivity
3. LLMs cannot—ever—be “scaled” to achieve human-level intelligence
4. Most people who speculate in financial bubbles lose their investment
Any questions?
Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education
Russell Beale
https://arxiv.org/abs/2506.22231
Internal memo: Goldman Sachs launches a generative AI assistant firmwide to boost productivity; around 10,000 employees are already using the GS AI Assistant (Reuters)
https://www.reuters.com/business/goldman-sachs-launches-ai-a…
Vibe Modeling: Challenges and Opportunities
Jordi Cabot
https://arxiv.org/abs/2507.23120 https://arxiv.org/pdf/2507.23120
"My productivity is boosted, but ..." Demystifying Users' Perception on AI Coding Assistants
Yunbo Lyu, Zhou Yang, Jieke Shi, Jianming Chang, Yue Liu, David Lo
https://arxiv.org/abs/2508.12285
Beyond Autocomplete: Designing CopilotLens Towards Transparent and Explainable AI Coding Agents
Runlong Ye, Zeling Zhang, Boushra Almazroua, Michael Liut
https://arxiv.org/abs/2506.20062
China's MiniMax open sources MiniMax-M1, a model to handle complicated productivity tasks that supports 1M input tokens and it says surpasses DeepSeek's R1-0528 (Bloomberg)
https://www.bloomberg.com/news/articles/20
"Maybe We Need Some More Examples:" Individual and Team Drivers of Developer GenAI Tool Use
Courtney Miller, Rudrajit Choudhuri, Mara Ulloa, Sankeerti Haniyur, Robert DeLine, Margaret-Anne Storey, Emerson Murphy-Hill, Christian Bird, Jenna L. Butler
https://arxiv.org/abs/2507.21280
Source: OpenAI is preparing ChatGPT agents to let users create files compatible with PowerPoint and Excel, generate reports, and handle tasks involving websites (Stephanie Palazzolo/The Information)
https://www.theinformation.com/articles/op
Resolving Build Conflicts via Example-Based and Rule-Based Program Transformations
Sheikh Shadab Towqir, Fei He, Todd Mytkowicz, Na Meng
https://arxiv.org/abs/2507.19432 https:/…