physics_collab: Multilayer physicist collaborations (2015)
Two multiplex networks of coauthorships: one among the Pierre Auger Collaboration of physicists (2010-2012), and one among researchers who have posted preprints on arXiv.org (all papers up to May 2014). Layers represent different categories of publication, and an edge's weight indicates the number of reports the two authors wrote together. These layers are one-mode projections of the underlying author-paper bipartite network.
This n…
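The one-mode projection mentioned above is just a matrix product: if B is the author-paper incidence matrix, then B·Bᵀ counts shared papers. A toy sketch (made-up data, not the Auger dataset):

```python
import numpy as np

# Hypothetical incidence matrix: rows = authors, columns = papers.
B = np.array([
    [1, 1, 0],   # author 0 wrote papers 0 and 1
    [1, 0, 1],   # author 1 wrote papers 0 and 2
    [0, 1, 1],   # author 2 wrote papers 1 and 2
])

# One-mode projection onto authors: W[i, j] counts papers coauthored by i and j,
# which is exactly the edge weight described for these layers.
W = B @ B.T
np.fill_diagonal(W, 0)   # drop self-loops (an author's total paper count)
print(W)
```

The diagonal of B·Bᵀ holds each author's own paper count, so it is zeroed out to leave only the coauthorship weights.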
LLM-based speaker diarization correction: A generalizable approach
Georgios Efstathiadis, Vijay Yadav, Anzar Abbas
https://arxiv.org/abs/2406.04927 https:/…
This cloud architecture is fascinating.
1. Secure the hardware supply chain.
2. Rely on secure boot and the trust cache to assure loaded software.
3. Remove code that provides access into and out of the runtime.
4. Rely on per-boot on-disk encryption keys to secure data on Flash.
5. Generate per-node public/private key pairs for data exchange with client.
6. Rely on Secure Enclave to hide private keys from runtime.
7. Rely on 3rd party load balancers to batch…
Subtoot 'cause the post I saw had enough replies already, but this is a bad article (though it has a wonderful description of how Von Neumann machines work):
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
For context, I'm against most uses of modern LLMs for several good ethical reasons, and I think the current state of AI research funding is both unsustainable and harmful to knowledge development. However, I've done a tiny bit of deep learning research myself, and I think the tech has a lot of cool potential, even if on balance it might have even more terrifying potential.
The central problem with this article is that while it accurately describes ways that most human brains differ fundamentally from one way computers can be set up, it completely ignores how (computer) neural networks work, including the fact that they'd perform very similarly to the humans on the dollar-bill task, because they encode a representation of their training inputs as distributed tweaks to the connection weights of many simulated neurons. (Also, people with photographic memory do exist...)
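That "distributed tweaks to connection weights" point can be made concrete with a classic Hebbian-style toy (my own illustration, not from the article): several patterns are superimposed into one weight matrix, so recall from a cue is approximate reconstruction, not lookup of a stored copy, much like a person sketching a dollar bill from memory.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.standard_normal((3, 32))   # three "memories" to store

# Hebbian outer-product storage: every memory is smeared across the SAME weights.
W = np.zeros((32, 32))
for p in patterns:
    W += np.outer(p, p) / (p @ p)

# Recall the first memory from itself as a cue.
cue = patterns[0]
recalled = W @ cue

# The recall resembles the original (high cosine similarity) but is not an
# exact stored copy: the other memories bleed in through the shared weights.
cos = recalled @ cue / (np.linalg.norm(recalled) * np.linalg.norm(cue))
print(cos)
```

Nothing in `W` is a verbatim record of any one pattern, yet the pattern comes back recognizably, which is roughly the behavior the article treats as uniquely non-computational.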
I think that being challenged in one's metaphors is a great idea (read Paul Agre on AI), and this is a useful article to have read for that reason, but I think the more useful stance is a principled agnosticism toward whether the human brain works like a computer, along with a broader imagination for "what a computer works like." More specifically, I'm quite convinced the brain doesn't work like a modern operating system (effectively the central straw man in this article), but I reserve judgement on whether it works like a neural network.
#Birders have spoken, and I agree, visual confirmation is also a really important part of the process.
https://dotnet.social/@poppastring/112593804274982391
This episode of Přepište dějiny, "Dvacet let v Bruselu" ("Twenty Years in Brussels"), is really nice
https://podcasters.spotify.com/pod/show/prepistedejiny/episodes/Dvacet-let-v-Bruselu-e2kuqf7/a-abc4a0t
Processing, evaluating and understanding FMRI data with afni_proc.py
Richard C. Reynolds, Daniel R. Glen, Gang Chen, Ziad S. Saad, Robert W. Cox, Paul A. Taylor
https://arxiv.org/abs/2406.05248
One-pass Multiple Conformer and Foundation Speech Systems Compression and Quantization Using An All-in-one Neural Model
Zhaoqing Li, Haoning Xu, Tianzi Wang, Shoukang Hu, Zengrui Jin, Shujie Hu, Jiajun Deng, Mingyu Cui, Mengzhe Geng, Xunying Liu
https://arxiv.org/abs/2406.10160
I'm no fan of Max Karson, but when he interviewed Thaddeus Russell he laid out precisely why the "I'd kill any man who touched my daughter" line is total bullshit,
and Thad had to agree that he was right.
This preprint, https://arxiv.org/abs/2406.05248, has been replaced.
initial toot: https://mastoxiv.page/@arXiv_qbi…