Just saw this:
#AI can mean a lot of things these days, but many of the popular meanings imply a bevy of harms that I definitely don't think are worth incurring for a cute fish game. In fact, these harms are so acute that even "just" playing into the AI hype becomes its own kind of harm (it's similar to blockchain in that way).
@… noticed that the authors claim the code base is 80% AI generated, which is a red flag, because people with sound moral compasses wouldn't be using AI to "help" write code in the first place. In case it influences your thinking about it: the authors aren't, by some miracle, people who couldn't have built this app without help. They have the skills to write the code themselves; it likely would have taken longer, but the result would also have been better.
I was more interested in the fish-classification AI, and how dependent it might be on datacenters. Thankfully, a quick glance at the code confirms they're using ONNX and running a self-trained neural network on your device. While the exponentially increasing energy & water demands of datacenters supporting billion-parameter models are a real concern, this is not that. Even a non-AI game can burn a lot of cycles on someone's phone, and I don't think there's anything to complain about energy-wise if we're just using cycles on the end user's device, as long as we're not having them keep it on for hours crunching numbers the way blockchain stuff does. Running inference locally while the user is playing a game is a negligible environmental concern, unlike, say, calling out to ChatGPT, where you're directly feeding datacenter demand. Since they claimed to have trained the network themselves, and since it's actually totally reasonable to make your own dataset for this and get good-enough-for-a-silly-game results with just a few hundred examples, I don't have any ethical objections to the data sourcing or training processes either. Hooray! This is finally a case of "ethical use of neural networks" that I can hold up as an example of what people should be doing instead of the BS they are doing.
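For anyone unfamiliar with what "ONNX on your device" means in practice, here's a minimal sketch of what local inference with ONNX Runtime typically looks like. This is purely illustrative and not the game's actual code: the model filename, input size, and label list below are made up, and I'm using Python for readability even though the game presumably ships an ONNX runtime for its own platform.

```python
# Hypothetical sketch of on-device fish classification with ONNX Runtime.
# Model path, preprocessing, and labels are assumptions, not the game's code.
import numpy as np
import onnxruntime as ort

LABELS = ["clownfish", "angelfish", "guppy"]  # hypothetical class names

session = ort.InferenceSession("fish_classifier.onnx")  # hypothetical file
input_name = session.get_inputs()[0].name

def classify(image: np.ndarray) -> str:
    """image: HxWx3 uint8 array, already resized to the model's expected input."""
    x = image.astype(np.float32) / 255.0          # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))[np.newaxis]    # HWC -> NCHW, add batch dim
    logits = session.run(None, {input_name: x})[0][0]
    return LABELS[int(np.argmax(logits))]
```

The point is that nothing here touches the network: the model file ships with the app and every call runs on the player's own hardware, which is exactly why the datacenter-demand objection doesn't apply.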
But wait... Remember what I said about feeding the AI hype being its own form of harm? Yeah, between using AI tools for coding and calling their classifier "AI" in a way that makes it seem like the same kind of thing as ChatGPT et al., they're leaning into the hype rather than helping restrain it. And that means they're causing harm. Big AI companies can point to them and say "look, AI enables cute things you like" when AI didn't actually enable it. So I'm feeling meh about this cute game and won't be sharing it aside from this post. If you love the cute fish, you don't really have to feel bad about playing with it, but I'd feel bad about advertising it without a disclaimer.
Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution
Tainyi Zhang, Zheng-Peng Duan, Peng-Tao Jiang, Bo Li, Ming-Ming Cheng, Chun-Le Guo, Chongyi Li
https://arxiv.org/abs/2508.16557
Full-spectrum modeling of mobile gamma-ray spectrometry systems in scattering media
David Breitenmoser, Alberto Stabilini, Sabine Mayer
https://arxiv.org/abs/2506.17820
TransLight: Image-Guided Customized Lighting Control with Generative Decoupling
Zongming Li, Lianghui Zhu, Haocheng Shen, Longjin Ran, Wenyu Liu, Xinggang Wang
https://arxiv.org/abs/2508.14814
Development and testing of integrated readout electronics for next generation SiSeRO (Single electron Sensitive Read Out) devices
Tanmoy Chattopadhyay, Haley R. Stueber, Abigail Y. Pan, Sven Herrmann, Peter Orel, Kevan Donlon, Steven W. Allen, Marshall W. Bautz, Michael Cooper, Catherine E. Grant, Beverly LaMarr, Christopher Leitz, Andrew Malonis, Eric D. Miller, R. Glenn Morris, Gregory Prigozhin, Ilya Prigozhin, Artem Poliszczuk, Keith Warner, Daniel R. Wilkins
A Fast, Reliable, and Secure Programming Language for LLM Agents with Code Actions
Stephen Mell, Botong Zhang, David Mell, Shuo Li, Ramya Ramalingam, Nathan Yu, Steve Zdancewic, Osbert Bastani
https://arxiv.org/abs/2506.12202
Tree-Based Text Retrieval via Hierarchical Clustering in RAG Frameworks: Application on Taiwanese Regulations
Chia-Heng Yu, Yen-Lung Tsai
https://arxiv.org/abs/2506.13607
NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
NextStep Team, Chunrui Han, Guopeng Li, Jingwei Wu, Quan Sun, Yan Cai, Yuang Peng, Zheng Ge, Deyu Zhou, Haomiao Tang, Hongyu Zhou, Kenkun Liu, Ailin Huang, Bin Wang, Changxin Miao, Deshan Sun, En Yu, Fukun Yin, Gang Yu, Hao Nie, Haoran Lv, Hanpeng Hu, Jia Wang, Jian Zhou, Jianjian Sun, Kaijun Tan, Kang An, Kangheng Lin, Liang Zhao, Mei Chen, Peng Xing, Rui Wang, Shiyu Liu, Shutao Xia, Tianhao You, Wei Ji…
Reward Models Enable Scalable Code Verification by Trading Accuracy for Throughput
Gabriel Orlanski, Nicholas Roberts, Aws Albarghouthi, Frederic Sala
https://arxiv.org/abs/2506.10056
TTS-CtrlNet: Time varying emotion aligned text-to-speech generation with ControlNet
Jaeseok Jeong, Yuna Lee, Mingi Kwon, Youngjung Uh
https://arxiv.org/abs/2507.04349