Not interested in a debate on which ways are better or worse, but it seems to me that @TheASF has a fair bit of material on "The Apache Way" which one could approach as a how-to, even for projects that are not under the ASF or any other domiciling organization. It's not perfect, but I'm not aware of any ASF project having suffered the sort of nasty takeover seen in the Ruby affair.
OTOH, it may require the "hippie" vibe of ASF focused on consensus and community over code t…
The Jet Origin of the Mid-infrared Excess in the Black Hole V404 Cygni in Quiescence
E. S. Borowski, R. I. Hynes, Q. Hunt, A. J. Tetarenko, R. M. Plotkin, T. Shahbaz, P. Gandhi, T. J. Maccarone, J. C. A. Miller-Jones, C. O. Heinke, A. W. Shaw, T. D. Russell, G. R. Sivakoff, P. A. Charles, E. V. Palaiologou, P. Reig
https://arxiv…
A Methodological Study on Data Representation for Machine Learning Modelling of Thermal Conductivity of Rare-Earth Oxides
Amiya Chowdhury, Acacio Rinc\'on Romero, Eduardo Aguilar-Bejarano, Halar Memon, Grazziela Figueredo, Tanvir Hussain
https://arxiv.org/abs/2509.18951
Graphene Frontiers: Recent Advancements in Energy and Electronics Applications
Abdallah M. Abdeldaiem, Abdulrhman M. Alaraj, Ahmed K. Abozaid, Habiba E. Elsayegh, Mohamed A. Khamis, Mohamed M. Kedra, Mahmoud A. Elqassas, Ahmed M. Dowidar, Aya A. Esmaeil, Nora H. El mowafy, Fayza R. Ramadan, Walid J. Hamouda, Sara R. Ghazal, Haneen A. Saad, Naglaa M. Zian, Fatma Sameh, Alshimaa M. Rizk, Mena K. Selema, Aml F. Dawood, Ebrahem H. Abdelaal, Walid Ismail, Mahmoud Abdelfatah, Swellam W. Shar…
Congratulations to Akhil Thomas for his successful PhD defense today in Freiburg with a thesis on data-driven analysis of microstructure-sensitive fatigue damage initiation! 🎉🥂🙌
#pmd #materialsscience #ai
Series D, Episode 03 - Traitor
LEITZ: Because my meeting was with Hunda, General, and two off-worlders. [Pours a glass of wine for himself. His manner is insolent]
GENERAL: Hunda is a rebel leader.
https://blake.torpidity.net/m/403/352 B7B3
Back to poking at the solder cross section test sample.
Initial findings:
1) I should have sanded them more by hand prior to embedding. There's a LOT of material to burn through. I've probably worn out 5-10 P180 sanding disks so far and am not even at the PTHs on the main region of interest yet.
2) The samples are quite a bit off level. I need to figure out a better way to fixture them in the embedding mold so they don't tip forward.
I'll keep on polis…
AI, AGI, and learning efficiency
An addendum to this: I'm someone who would accurately be called "anti-AI" in the modern age, yet I'm also an "AI researcher" in some ways (have only dabbled in neural nets).
I don't like:
- AI systems that are the product of labor abuses towards the data workers who curate their training corpora.
- AI systems that use inordinate amounts of water and energy during an intensifying climate catastrophe.
- AI systems that are fundamentally untrustworthy and which reinforce and amplify human biases, *especially* when those systems are exposed in a way that invites harms.
- AI systems which are designed to "save" my attention or brain bandwidth, but which in doing so cripple my understanding of the things I might use them for, when in fact that understanding was the thing I was supposed to be using my time to gain, and where the later lack of such understanding will be costly to me.
- AI systems that are designed by, and whose hype fattens the purse of, people who materially support genocide and the construction of concentration camps (a.k.a. fascists).
In other words, I do not like, and except in very extenuating circumstances I will not use, ChatGPT, Claude, Copilot, Gemini, etc.
On the other hand, I do like:
- AI research as an endeavor to discover new technologies.
- Generative AI as a research topic using a spectrum of different methods.
- Speculating about non-human intelligences, including artificial ones, and including how to behave ethically towards them.
- Large language models as a specific technique, and autoencoders and other neural networks, assuming they're used responsibly in terms of both resource costs & presentation to end users.
I write this because I think some people (especially folks without CS backgrounds) may feel that opposing AI for all the harms it's causing runs the risk of opposing technological innovation more broadly, and/or may feel there's a risk that they will be "left behind" as everyone else embraces the hype and these technologies inevitably become ubiquitous and essential (I know I feel this way sometimes). Just know that it is entirely possible and logically consistent to oppose many forms of modern AI while also embracing and even being optimistic about AI research, and that while LLMs are currently all the rage, they're not the endpoint of what AI will look like in the future, and their downsides are not inherent in AI development.
Hamiltonian parameter inference from RIXS spectra with active learning
Marton K. Lajer, Xin Dai, Kipton Barros, Matthew R. Carbone, S. Johnston, M. P. M. Dean
https://arxiv.org/abs/2507.16021
Stability by Design: Atomistic Insights into Hydrolysis-Driven MOF Degradation
Ashok Yacham, Tarak K. Patra, Jithin John Varghese, Richa Sharma
https://arxiv.org/abs/2507.16197