Tootfinder

Opt-in global Mastodon full text search. Join the index!

@netzschleuder@social.skewed.de
2024-06-08 06:00:05

physics_collab: Multilayer physicist collaborations (2015)
Two multiplex networks of coauthorships among the Pierre Auger Collaboration of physicists (2010-2012) and among researchers who have posted preprints on arXiv.org (all papers up to May 2014). Layers represent different categories of publication, and an edge's weight indicates the number of reports written by the authors. These layers are one-mode projections from the underlying author-paper bipartite network.
This n…

physics_collab: Multilayer physicist collaborations (2015). 514 nodes, 7153 edges. https://networks.skewed.de/net/physics_collab#pierreAuger
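The dataset description above mentions that the layers are one-mode projections of an underlying author-paper bipartite network. As a rough sketch of what that projection means (my own illustration with made-up author names, not the Netzschleuder pipeline):

```python
from collections import defaultdict
from itertools import combinations

def project_authors(paper_authors):
    """One-mode projection of an author-paper bipartite network:
    nodes are authors, and an edge's weight is the number of papers
    the two authors share."""
    weights = defaultdict(int)
    for authors in paper_authors.values():
        for a, b in combinations(sorted(set(authors)), 2):
            weights[(a, b)] += 1
    return dict(weights)

papers = {
    "p1": ["Alice", "Bob"],
    "p2": ["Alice", "Bob"],
    "p3": ["Alice", "Carol"],
}
print(project_authors(papers))
# {('Alice', 'Bob'): 2, ('Alice', 'Carol'): 1}
```

Here Alice and Bob coauthored two papers, so their projected edge has weight 2, matching the convention described for the dataset.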
@arXiv_eessAS_bot@mastoxiv.page
2024-06-10 06:56:20

LLM-based speaker diarization correction: A generalizable approach
Georgios Efstathiadis, Vijay Yadav, Anzar Abbas
arxiv.org/abs/2406.04927

@drahardja@sfba.social
2024-06-11 03:01:37

This cloud architecture is fascinating.
1. Secure the hardware supply chain.
2. Rely on secure boot and the trust cache to assure loaded software.
3. Remove code that provides access in and out of the runtime.
4. Rely on per-boot on-disk encryption keys to secure data on Flash.
5. Generate per-node public/private key pairs for data exchange with the client.
6. Rely on Secure Enclave to hide private keys from runtime.
7. Rely on 3rd party load balancers to batch…
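Step 5 in the list above can be sketched with a toy Diffie-Hellman exchange (my own illustration with deliberately insecure toy parameters; the actual system's protocol and key sizes are not specified here):

```python
# Toy Diffie-Hellman illustration of per-node key pairs: each node generates
# its own key pair, so a client derives a shared secret with that node only.
# Toy parameters -- NOT cryptographically secure; real systems use vetted
# curves and audited libraries.
import secrets

P = 18446744073709551557  # 2**64 - 59, a prime (illustrative modulus only)
G = 5                     # generator (illustrative)

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

node_priv, node_pub = keypair()      # generated per node
client_priv, client_pub = keypair()  # generated per client session

# Each side combines its private key with the other's public key;
# both arrive at the same shared secret without ever sending a private key.
node_secret = pow(client_pub, node_priv, P)
client_secret = pow(node_pub, client_priv, P)
print(node_secret == client_secret)  # True
```

The point of per-node keys is that compromising one node's private key exposes only sessions with that node, not the whole fleet.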

@tiotasram@kolektiva.social
2024-06-12 11:40:32

Subtoot 'cause the post I saw had enough replies already, but this is a bad article (though it has a wonderful description of how Von Neumann machines work):
aeon.co/essays/your-brain-does
For context, I'm against most uses of modern LLMs for several good ethical reasons, and I think the current state of AI research funding is both unsustainable and harmful to knowledge development. However, I've done a tiny bit of deep learning research myself, and I think the tech has a lot of cool potential, even if on balance it might have even more terrifying potential.
The central problem with this article is that while it accurately describes ways that most human brains differ fundamentally from one way computers can be set up, it completely ignores how (computer) neural networks work, including the fact that they'd perform very similarly to the humans on the dollar bill task, because they encode a representation of their training inputs as distributed tweaks to the connection weights of many simulated neurons. (Also, people with photographic memory do exist...)
I think that being challenged in one's metaphors is a great idea (read Phil Agre on AI) and this is a useful article to have read for that reason, but I think the more useful stance is a principled agnosticism towards whether the human brain works like a computer, along with a broader imagination for "what a computer works like." More specifically, I'm quite convinced the brain doesn't work like a modern operating system (effectively the central straw man in this article), but I reserve judgement on whether it works like a neural network.
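The claim that a network stores patterns as distributed weight tweaks, rather than as a literal copy, can be shown with a minimal Hopfield-style memory (my own sketch, not from the article or the poster):

```python
# Minimal Hopfield-style memory in pure Python: a pattern is stored only as
# distributed pairwise connection weights, and recall reconstructs it from a
# corrupted cue -- approximate pattern completion, as in the dollar bill task.
N = 16
pattern = [1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1]

# Hebbian outer-product rule: w[i][j] accumulates p_i * p_j (no self-loops).
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(N)]
     for i in range(N)]

# Cue: the stored pattern with three units flipped (a "degraded memory").
cue = pattern[:]
for i in (0, 5, 9):
    cue[i] = -cue[i]

# One synchronous update: each unit takes the sign of its weighted input.
def recall(state):
    return [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

print(recall(cue) == pattern)  # True: the network completes the pattern
```

Nothing in `W` looks like the pattern itself, yet the pattern is recoverable from a partial cue, which is the sense in which the storage is "distributed."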

@poppastring@dotnet.social
2024-06-11 19:46:44

#Birders have spoken, and I agree: visual confirmation is also a really important part of the process.
dotnet.social/@poppastring/112

@jaandrle@fosstodon.org
2024-06-17 13:40:44

This episode of Přepište dějiny, "Dvacet let v Bruselu" ("Twenty Years in Brussels"), is really good
podcasters.spotify.com/pod/sho

Přepište dějiny on Czechs: "Our starts are great … except the runner takes off and after two meters says, 'there's no point anyway, look how the others are running … that guy won by cheating anyway, he was doping'"
Přepište dějiny on the slogan "we're returning to Europe": you can only return to something in the past … meanwhile the European project has moved on … it means joining that Europe, joining the future… we have to agree that we want to move forward
@arXiv_qbioNC_bot@mastoxiv.page
2024-06-11 07:08:30

Processing, evaluating and understanding FMRI data with afni_proc.py
Richard C. Reynolds, Daniel R. Glen, Gang Chen, Ziad S. Saad, Robert W. Cox, Paul A. Taylor
arxiv.org/abs/2406.05248

@arXiv_csSD_bot@mastoxiv.page
2024-06-17 07:26:14

One-pass Multiple Conformer and Foundation Speech Systems Compression and Quantization Using An All-in-one Neural Model
Zhaoqing Li, Haoning Xu, Tianzi Wang, Shoukang Hu, Zengrui Jin, Shujie Hu, Jiajun Deng, Mingyu Cui, Mengzhe Geng, Xunying Liu
arxiv.org/abs/2406.10160

@gwendolyn@mastodon.cloud
2024-06-11 06:36:34

I'm no fan of Max Karson, but when he interviewed Thaddeus Russell he laid out precisely why the "I'd kill any man who touched my daughter" line is total bullshit,
and Thad had to agree that he was right.

@arXiv_qbioNC_bot@mastoxiv.page
2024-06-12 09:05:58

This arxiv.org/abs/2406.05248 has been replaced.
initial toot: mastoxiv.page/@arXiv_qbi…