X is testing using Community Notes to highlight posts that are liked by users with different perspectives (Sarah Perez/TechCrunch)
https://techcrunch.com/2025/07/24/x-to-test-using-community-notes-to-find-the-posts-everyone-likes/
There's a woman I know who, when she was pregnant, was very keen to hear the opinions of crystal diviners and homeopaths on what sex her new baby would be, but wouldn't let the ultrasound technician (the one who actually knew) tell her, because Spoilers.
On that note, I'm happy to watch #doctorWho #badWolf #tv
dbtropes_feature: Artistic works and their tropes
A bipartite network of artistic works (movies, novels, etc.) and their tropes (stylistic conventions or devices), as extracted from tvtropes.org. The date of this snapshot is uncertain.
This network has 152093 nodes and 3232134 edges.
Tags: Informational, Relatedness, Unweighted
https…
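As a minimal sketch of what a bipartite work–trope network like this looks like in code: the snippet below uses a plain Python dict with made-up work and trope names (the real snapshot has 152,093 nodes and 3,232,134 edges; nothing here is from the actual data) and computes the classic one-mode projection, where two works are linked when they share a trope.

```python
# Toy work–trope bipartite network as an adjacency dict.
# Work and trope names are illustrative, not from the dbtropes snapshot.
work_tropes = {
    "WorkA": {"ChekhovsGun", "RedHerring"},
    "WorkB": {"ChekhovsGun"},
    "WorkC": {"UnreliableNarrator"},
}

# One-mode projection: connect two works when they share at least one trope.
works = list(work_tropes)
projection = set()
for i, a in enumerate(works):
    for b in works[i + 1:]:
        if work_tropes[a] & work_tropes[b]:  # non-empty trope overlap
            projection.add(frozenset((a, b)))

# WorkA and WorkB share ChekhovsGun; WorkC shares nothing.
print(projection)
```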
Computational Design of Two-Dimensional MoSi$_2$N$_4$ Family Field-Effect Transistor for Future \AA ngstr\"om-Scale CMOS Technology Nodes
Che Chen Tho, Zongmeng Yang, Shibo Fang, Shiying Guo, Liemao Cao, Chit Siong Lau, Fei Liu, Shengli Zhang, Jing Lu, L. K. Ang, Lain-Jong Li, Yee Sin Ang
https://arxiv.org/abs/2506.21366
Condensed Representation of RDF and its Application on Graph Versioning
Jey Puget Gil, Emmanuel Coquery, John Samuel, Gilles Gesquiere
https://arxiv.org/abs/2506.21203
This week's ISE 2025 lecture was focussed on artificial neural networks. In particular, we were discussing how to get rid of manual feature engineering and instead learn representations from raw data with convolutional neural networks.
#AI #ArtificialNeuralNetworks
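To make the contrast concrete, here's a minimal numpy sketch (toy image and kernel, purely illustrative): a hand-crafted Sobel edge filter is exactly the kind of manually engineered feature that a convolutional network would instead learn as trainable kernel weights from raw pixels.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core op of a conv layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-crafted vertical-edge kernel: classic manual feature engineering.
# In a CNN these nine numbers would be learned parameters, not chosen by hand.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy 5x5 image with a step edge at column 2.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

response = conv2d(image, sobel_x)
print(response)  # strong response where the edge is, zero on flat regions
```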
Noted while reading: 'a data structure or a block of code are things that make implicit and subjective arguments about how to see the world. This is possibly the single most important basic insight that Digital Humanities as a field needs to impart, because it affects so much of the world around us' - excellent post by @…
It may look empty and quiet at the MPS Göttingen right now - but we have been bustling about and are very busy getting everything ready for our visitors at the Night of Science tomorrow! Care for a few sneak previews in today's thread?
@…
#ndwgoe
AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (I have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth, but whose use cripples my understanding of the things I apply them to, when in fact that understanding was the thing I was supposed to be spending my time to gain, and where the later lack of that understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purses of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to oppose many forms of modern AI while also embracing, and even being optimistic about, AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.