Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@acka47@openbiblio.social
2026-02-20 08:10:08

This year at #bibliocon26 I feel in very good hands in a session together with, among others, @… & @…:

TK 2: Data Curation
Room V / 5
Thursday, 21 May, 2:00 PM – 4:00 PM
Data in the Network

Linked Data Reloaded: The DNB SPARQL Service
Speaker: Tracy Arndt (Leipzig, Germany)

URIs, which URIs? A fundamental problem for FAIR and Linked Open Data
Speaker: Marco Klindt (Berlin, Germany)
Speaker: Alexander Winkler (Berlin, Germany)

Networked Knowledge Graphs: Community-Based Creation of Semantic Metadata Profiles with AIMS
Speaker: Max Sc…
@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:19

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
arxiv.org/abs/2512.16927 mastoxiv.page/@arXiv_csDS_bot/
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
arxiv.org/abs/2512.16956 mastoxiv.page/@arXiv_csSE_bot/
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
arxiv.org/abs/2512.16962 mastoxiv.page/@arXiv_csCR_bot/
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
arxiv.org/abs/2512.16964 mastoxiv.page/@arXiv_eessIV_bo
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
arxiv.org/abs/2512.16969 mastoxiv.page/@arXiv_csAI_bot/
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
arxiv.org/abs/2512.16970 mastoxiv.page/@arXiv_csAI_bot/
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
arxiv.org/abs/2512.17028 mastoxiv.page/@arXiv_csCL_bot/
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
arxiv.org/abs/2512.17075 mastoxiv.page/@arXiv_csCL_bot/
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
arxiv.org/abs/2512.17127 mastoxiv.page/@arXiv_statML_bo
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
arxiv.org/abs/2512.17146 mastoxiv.page/@arXiv_csCR_bot/
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
arxiv.org/abs/2512.17169 mastoxiv.page/@arXiv_qbioBM_bo
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
arxiv.org/abs/2512.17185 mastoxiv.page/@arXiv_qfinRM_bo
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
arxiv.org/abs/2512.17209 mastoxiv.page/@arXiv_csSD_bot/
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
arxiv.org/abs/2512.17213 mastoxiv.page/@arXiv_csCV_bot/
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
arxiv.org/abs/2512.17245 mastoxiv.page/@arXiv_condmatmt
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
arxiv.org/abs/2512.17251 mastoxiv.page/@arXiv_csCR_bot/
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
arxiv.org/abs/2512.17254 mastoxiv.page/@arXiv_csCR_bot/
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
arxiv.org/abs/2512.17259 mastoxiv.page/@arXiv_csMA_bot/
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
arxiv.org/abs/2512.17277 mastoxiv.page/@arXiv_csIR_bot/
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
arxiv.org/abs/2512.17281 mastoxiv.page/@arXiv_csSD_bot/
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
arxiv.org/abs/2512.17340 mastoxiv.page/@arXiv_statME_bo

@tante@tldr.nettime.org
2026-01-19 16:17:59

Datacenters/compute should be a thing that states provide to their citizens, kind of as a public utility, run only on renewables in regions where the negative externalities would not affect people (requiring a public vote with a high threshold for acceptance). With everyone getting a fair allotment for their own use (and the ability to pool it together in associations). Decisions on added infrastructure etc. would be made democratically, with the locations of those data centers being comp…

@metacurity@infosec.exchange
2026-01-16 13:57:54

Korea's Fair Trade Commission is threatening to shut down Coupang's business altogether in connection with a massive data breach.
koreatimes.co.kr/business/comp

@me@mastodon.peterjanes.ca
2026-02-21 03:59:13

I wonder if I currently have the *longest* #11ty build out there? 25 minutes and 19 seconds!
To be fair, 24 minutes and 50 seconds of that is retrieving remote data, which unfortunately can't be sped up. From cache, though, it's a nifty 39 seconds to write 3600 files, before any significant effort to optimize.

@doktrock@toad.social
2026-01-30 17:16:58

Of interest to experimental petrologists, upcoming virtual workshop "to discuss and plan to address systematic bias in predictive models caused by the way trace element partitioning data is currently published." #geology #geochemistry ⚒️

@mgorny@social.treehouse.systems
2026-01-18 18:04:19

Cynicism, "AI"
Someone pointed me to the "Reflections on 2025" post by Samuel Albanie [1]. The author's writing style makes it quite a fun read, I admit.
The first part, "The Compute Theory of Everything", is an optimistic piece on "#AI". Long story short, poor "AI researchers" have been struggling for years because of the prevailing misconception that "machines should have been powerful enough". Fortunately, now they can finally get their hands on the kind of power that used to be available only to supervillains, and all they have to do is forget about morals, accept that their research will be used to murder millions of people, and that a few more millions will die as a side effect of the climate crisis. But I'm digressing.
The author refers to an essay by Hans Moravec, "The Role of Raw Power in Intelligence" [2]. It's also quite an interesting read, starting with a chapter on how intelligence evolved independently at least four times. The key point inferred from that seems to be that all we need is more computing power, and we'll eventually "brute-force" all AI-related problems (or die trying, I guess).
As a disclaimer, I have to say I'm not a biologist. I'm just a random guy who has read a fair number of pieces on evolution. And I feel the analogies drawn here are misleading at best.
Firstly, there seems to be an assumption that evolution inexorably leads to higher "intelligence", with a certain implicit assumption about what intelligence is. Per that assumption, any animal that gets "brainier" will eventually become intelligent. However, this seems to miss the point that neither evolution nor learning operates in a void.
Yes, many animals did attain a certain level of intelligence, but they attained it through a long chain of development, while solving specific problems, in specific bodies, in specific environments. I don't think you can just stuff more brains into a random animal and expect it to attain human intelligence; and the same goes for a computer — you can't expect that, given more power, algorithms will eventually converge on human-like intelligence.
Secondly, and perhaps more importantly, what evolution succeeded at first was producing neural networks that are far more energy-efficient than whatever computers are doing today. Even if "computing power" did indeed pave the way for intelligence, what came first was extremely efficient "hardware". Nowadays, humans seem to be skipping that part. Optimizing is hard, so why bother with it? We can afford bigger data centers, we can afford to waste more energy, we can afford to deprive people of drinking water, so let's just skip to the easy part!
And on top of that, we're trying to squash hundreds of millions of years of evolution into… a decade, perhaps? What could possibly go wrong?
[1] #NoAI #NoLLM #LLM

@teledyn@mstdn.ca
2025-12-23 20:32:42

I think it is fair to say that most well-to-do people would not uproot generational homelands and livelihoods to take a chance on being an unknown in a foreign landscape, even if one has connections there, and yet this current migration dwarfs the post-WWII flux. Sure, maybe I myself set out over the horizon in search of (mild) adventure, but I find it hard to believe 4% of humanity are crazy like me.
That alone speaks volumes about how successful we are at governing the planet, assuming, of course, that data matters in the least, which, presently, it does not.
Migration and Human Mobility: Key Figures
migrationdataportal.org/key-fi