Tootfinder

Opt-in global Mastodon full text search. Join the index!

@muz4now@mastodon.world
2025-07-30 19:34:07

Human beings need love, so everyone is someone "For Those In Need of Love Today"
#InstrumentalMusic #MellowMusic #NowPlaying

@Mediagazer@mstdn.social
2025-07-30 23:35:54

LADbible launches Betches, the US women's media brand it acquired in 2023, in the UK aiming to fill a "gap in the market" for Gen Z and millennial women (Charlotte Tobitt/Press Gazette)
pressgazette.co.uk/publish…

@rmdes@mstdn.social
2025-07-30 15:37:15

A 2023 debate on victims of cults, from coercive control to recovery. What mechanisms are at work, how can cult rhetoric be identified, and how can one find help to escape and rebuild? chardonsbleus.org/debat-victim

@arXiv_mathNT_bot@mastoxiv.page
2025-07-30 09:05:22

Simultaneous Diophantine approximation on the three dimensional Veronese curve
Dmitry Badziahin
arxiv.org/abs/2507.21401 arxiv.org/pdf/2507…

@newsie@darktundra.xyz
2025-07-30 16:33:49

IBM: Average cost of a data breach in US shoots to record $10 million therecord.media/ibm-data-breac

@arXiv_csDS_bot@mastoxiv.page
2025-07-28 08:31:51

Edge-weighted Matching in the Dark
Zhiyi Huang, Enze Sun, Xiaowei Wu, Jiahao Zhao
arxiv.org/abs/2507.19366 arxiv.org/pdf/2507.19366

@arXiv_physicssocph_bot@mastoxiv.page
2025-07-29 09:31:51

Greening Schoolyards and Urban Property Values: A Systematic Review of Geospatial and Statistical Evidence
Mahshid Gorjian
arxiv.org/abs/2507.19934

@arXiv_astrophEP_bot@mastoxiv.page
2025-07-30 09:28:52

A Dormant Captured Oort Cloud Comet Awakens: (18916) 2000 OG44
Colin Orion Chandler, William J. Oldroyd, Chadwick A. Trujillo, Dmitrii E. Vavilov, William A. Burris
arxiv.org/abs/2507.21324

@arXiv_econGN_bot@mastoxiv.page
2025-07-29 09:02:01

Assessing the Sensitivities of Input-Output Methods for Natural Hazard-Induced Power Outage Macroeconomic Impacts
Matthew Sprintson, Edward Oughton
arxiv.org/abs/2507.19989

@arXiv_csCC_bot@mastoxiv.page
2025-07-28 07:47:31

Downward self-reducibility in the total function polynomial hierarchy
Karthik Gajulapalli, Surendra Ghentiyala, Zeyong Li, Sidhant Saraogi
arxiv.org/abs/2507.19108

@kamasystems@social.linux.pizza
2025-06-25 12:51:19

Lockheed Constellation over Santiago de Cuba archivo.kamasystems.nl/es/2023

Lockheed VC-121A Constellation
@muz4now@mastodon.world
2025-07-31 03:51:00

Meditation Suite 2 #NowPlaying #nowplaying #originalmusic #streaming

@arXiv_physicssocph_bot@mastoxiv.page
2025-08-29 08:43:31

New inequality indicators for team ranking in multi-stage female professional cyclist races
Marcel Ausloos
arxiv.org/abs/2508.20113 arxiv.o…

@a_j_millar@fediscience.org
2025-07-24 11:56:24

📈 Bioscience researchers shared data in 92% of articles that we manually evaluated from 2023. In the chart 👇 orange shading shows 45% of articles shared ALL the relevant data, up from 7% in 2014👏. Sharing varied by data type as expected, 🧬 vs. 🔬, among several other factors.
Thanks to BIH QUEST @ChariteBerlin for ODDPub, which gave a parallel, programmatic evaluation.

In contrast, testing an international sample of circadian, neuroscience and mental health articles by the same manual method … 2/3
Edited for Alt-text.

@Techmeme@techhub.social
2025-07-31 20:22:34

Figma's stock closes up 250% at $115.50, after Figma sold shares at $33 in its IPO, hitting a ~$68B valuation; Adobe's $20B Figma acquisition fell apart in 2023 (Jordan Novet/CNBC)
cnbc.com/2025/07/31/figma-fig-

@arXiv_csCC_bot@mastoxiv.page
2025-07-25 07:31:11

Fagin's Theorem for Semiring Turing Machines
Guillermo Badia, Manfred Droste, Thomas Eiter, Rafael Kiesel, Carles Noguera, Erik Paul
arxiv.org/abs/2507.18375

@Techmeme@techhub.social
2025-07-08 23:31:26

Apple says its design team will report to Tim Cook after Jeff Williams retires; Williams was overseeing the team following Evans Hankey's departure in 2023 (Chance Miller/9to5Mac)
9to5mac.com/2025/07/08/apple-d

@arXiv_quantph_bot@mastoxiv.page
2025-07-11 10:05:31

Strong converse rate for asymptotic hypothesis testing in type III
Nicholas Laracuente, Marius Junge
arxiv.org/abs/2507.07989

@thomasfuchs@hachyderm.io
2025-07-31 15:27:27

Yes, it’s absurd.
LLMs don’t scale to reach “AGI”. That is mathematically proven[1], so it doesn’t matter how large your data center is.
But that’s not the main reason why this is absurd—as a society we shouldn’t spend these enormous resources and lasting environmental damage on this _even if it would work_.
[1] irisvanrooijcogsci.com/2023/09

@arXiv_mathGT_bot@mastoxiv.page
2025-07-09 08:34:22

On Jiang's Bounded Index Property for products of nilmanifolds
Peng Wang, Qiang Zhang
arxiv.org/abs/2507.06132 ar…

@Mediagazer@mstdn.social
2025-07-07 08:05:31

Q&A with British Library CEO Rebecca Lawrence on dealing with the aftermath of a major October 2023 cyberattack, AI scraping, AI for text analysis, and more (Mishal Husain/Bloomberg)

@vosje62@mastodon.nl
2025-05-31 17:33:30

Why there is more excess mortality during heat, pol... | #EenVandaag
eenvandaag.avrotros.nl/artikel

@arXiv_qfinTR_bot@mastoxiv.page
2025-07-16 08:57:31

Kernel Learning for Mean-Variance Trading Strategies
Owen Futter, Nicola Muca Cirone, Blanka Horvath
arxiv.org/abs/2507.10701

@arXiv_csAI_bot@mastoxiv.page
2025-07-31 09:17:51

The Incomplete Bridge: How AI Research (Mis)Engages with Psychology
Han Jiang, Pengda Wang, Xiaoyuan Yi, Xing Xie, Ziang Xiao
arxiv.org/abs/2507.22847

@NFL@darktundra.xyz
2025-07-31 11:49:12

'I think this is really his time': Why the Raiders need Tyree Wilson to live up to first-round billing espn.com/nfl/story/_/id/458595

@tinoeberl@mastodon.online
2025-06-06 14:16:30

In 2024, more than two-thirds of all new residential buildings (#Wohngebäude) in #Deutschland were primarily heated with heat pumps (#Wärmepumpen).
Their share has more than doubled in ten years. Tr…

@arXiv_csCR_bot@mastoxiv.page
2025-07-31 09:28:51

Cryptanalysis of LC-MUME: A Lightweight Certificateless Multi-User Matchmaking Encryption for Mobile Devices
Ramprasad Sarkar
arxiv.org/abs/2507.22674

@arXiv_astrophHE_bot@mastoxiv.page
2025-07-04 09:17:31

X-ray observations of Nova Sco 2023: Spectroscopic evidence of charge exchange
Sharon Mitrani, Ehud Behar, Marina Orio, Jack Worley
arxiv.org/abs/2507.02465

@arXiv_csSD_bot@mastoxiv.page
2025-07-08 10:45:31

TTS-CtrlNet: Time varying emotion aligned text-to-speech generation with ControlNet
Jaeseok Jeong, Yuna Lee, Mingi Kwon, Youngjung Uh
arxiv.org/abs/2507.04349

@BBC6MusicBot@mastodonapp.uk
2025-07-20 13:00:31

🇺🇦 #NowPlaying on #BBC6Music's #GuyGarveysFinestHour
Aretha Franklin:
🎵 Save Me
#ArethaFranklin
titandavis2.bandcamp.com/track
open.spotify.com/track/7mvKpoe

@rmdes@mstdn.social
2025-06-06 14:31:03

Central Tibetan Administration Suspends Attestation Robert Spatz’s Ogyen Kunzang Choling openbuddhism.org/blog/2023/cen

@arXiv_astrophGA_bot@mastoxiv.page
2025-07-04 09:31:11

Ammonia in the hot core W51-IRS2: Maser line profiles, variability, and saturation
E. Alkhuja, C. Henkel, Y. T. Yan, B. Winkel, Y. Gong, G. Wu, T. L. Wilson, A. Wootten, A. Malawi
arxiv.org/abs/2507.02214

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2025-06-06 07:31:54

Pressure-Driven Metallicity in Ångström-Thickness 2D Bismuth and Layer-Selective Ohmic Contact to MoS2
Shuhua Wang, Shibo Fang, Qiang Li, Yunliang Yue, Zongmeng Yang, Xiaotian Sun, Jing Lu, Chit Siong Lau, L. K. Ang, Lain-Jong Li, Yee Sin Ang
arxiv.org/abs/2506.05133

@muz4now@mastodon.world
2025-07-31 16:41:01

Cycle of Creativity – Inspire, Create, Rest, Repeat #creativity #inspiration muz4now.com/2023/cy…

@Techmeme@techhub.social
2025-07-06 19:25:31

Q&A with British Library CEO Rebecca Lawrence on dealing with the aftermath of a major October 2023 cyberattack, AI scraping, AI for text analysis, and more (Mishal Husain/Bloomberg)
bloomberg.com/features/2025-re

@relcfp@mastodon.social
2025-07-31 06:10:22

Approaching Deadline: CFP: Conference: American Jewish Landscapes after October 7th
ift.tt/truigRx
Pirino on Suzuki, 'Humanitarian Internationalism Under Empire: The Global Evolution of the Japanese…
via Input 4 RELCFP

@arXiv_csDS_bot@mastoxiv.page
2025-08-12 10:07:03

Nearly Optimal Bounds for Stochastic Online Sorting
Yang Hu
arxiv.org/abs/2508.07823 arxiv.org/pdf/2508.07823

@rmdes@mstdn.social
2025-05-31 17:29:07

A 2023 debate on victims of cults, from coercive control to recovery. What mechanisms are at work, how can cult rhetoric be identified, and how can one find help to escape and rebuild? chardonsbleus.org/debat-victim

@tinoeberl@mastodon.online
2025-08-01 07:11:08

Despite the 2023 UN agreement to triple global energy (#Energie) from renewables (#Erneuerbaren) by 2030, only a few countries have adjusted their targets.
According to the analysis, the world remains well short of the 11 TW goal. Above all, major emitters such as the

@arXiv_mathAP_bot@mastoxiv.page
2025-06-04 07:47:31

Change of bifurcation type in 2D free boundary model of a moving cell with nonlinear diffusion
Leonid Berlyand, Oleksii Krupchytskyi, Tim Laux
arxiv.org/abs/2506.03138

@tiotasram@kolektiva.social
2025-07-31 16:25:48

LLM coding is the opposite of DRY
An important principle in software engineering is DRY: Don't Repeat Yourself. We recognize that having the same code copied in more than one place is bad for several reasons:
1. It makes the entire codebase harder to read.
2. It increases maintenance burden, since any problems in the duplicated code need to be solved in more than one place.
3. Because it becomes possible for the copies to drift apart if changes to one aren't transferred to the other (maybe the person making the change has forgotten there was a copy) it makes the code more error-prone and harder to debug.
All modern programming languages make it almost entirely unnecessary to repeat code: we can move the repeated code into a "function" or "module" and then reference it from all the different places it's needed. At a larger scale, someone might write an open-source "library" of such functions or modules, and instead of re-implementing that functionality ourselves, we can use their code, with an acknowledgement.
Using another person's library this way is complicated, because now you're dependent on them: if they stop maintaining it or introduce bugs, you've inherited a problem. Still, you could always copy their project and maintain your own version, and it would be not much more work than if you had implemented everything yourself from the start. It's a little more complicated than this, but the basic principle holds, and it's a foundational one for software development in general and the open-source movement in particular.
The network of "citations" that forms as open-source software builds on other open-source software, and as people contribute patches to each other's projects, is a lot of what makes the movement into a community, and it can lead to collaborations that drive further development. So the DRY principle is important at both small and large scales.
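The move described above — hoisting duplicated logic into one function that every call site references — can be sketched in a few lines (Python for illustration; the function and names are hypothetical, not from the post):

```python
# DRY in miniature: the same normalization logic is needed in two places,
# so it lives in ONE function that both call sites reference.

def normalize_username(raw: str) -> str:
    """Shared helper: one definition, referenced everywhere it's needed."""
    return raw.strip().lower()

def register(raw_username: str) -> str:
    # Re-uses the helper instead of duplicating the strip/lower logic.
    return "registered " + normalize_username(raw_username)

def login(raw_username: str) -> str:
    # A future fix to normalization automatically applies here too,
    # so the two call sites can never drift apart.
    return "logged in " + normalize_username(raw_username)

print(register("  Alice "))  # -> registered alice
print(login("ALICE"))        # -> logged in alice
```

If the normalization rule ever changes (say, also rejecting empty names), it is changed once, and every caller picks it up — exactly the drift-prevention argument made in point 3 above.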
Unfortunately, the current crop of hyped-up LLM coding systems from the big players are antithetical to DRY at all scales:
- At the library scale, they train on open source software but then (with some unknown frequency) replicate parts of it line-for-line *without* any citation [1]. The person who was using the LLM has no way of knowing that this happened, or even any way to check for it. In theory the LLM company could build a system for this, but it's not likely to be profitable unless the courts actually start punishing these license violations, which doesn't seem likely based on results so far and the difficulty of finding out that the violations are happening. By creating these copies (and also mash-ups, along with lots of less-problematic stuff), the LLM users (enabled and encouraged by the LLM-peddlers) are directly undermining the DRY principle. If we get what the big AI companies claim to want, which is a massive shift towards machine-authored code, DRY at the library scale will effectively be dead, with each new project simply re-implementing the functionality it needs instead of ever using a library. This might seem to have some upside, since dependency hell is a thing, but the downside in terms of comprehensibility and therefore maintainability, correctness, and security will be massive. The eventual lack of new high-quality DRY-respecting code to train the models on will only make this problem worse.
- At the module & function level, AI is probably prone to re-writing rather than re-using the functions it needs, especially with a workflow where a human prompts it for many independent completions. This part I don't have direct evidence for, since I don't use LLM coding models myself except in very specific circumstances because it's not generally ethical to do so. I do know that when it tries to call existing functions, it often guesses incorrectly about the parameters they need, which I'm sure is a headache and source of bugs for the vibe coders out there. An AI could be designed to take more context into account and use existing lookup tools to get accurate function signatures and use them when generating function calls, but even though that would probably significantly improve output quality, I suspect it's the kind of thing that would be seen as too baroque and thus not a priority. I would love to hear I'm wrong about any of this, but I suspect the consequences are that any medium-or-larger sized codebase written with LLM tools will have significant bloat from duplicate functionality, and will have places where better use of existing libraries would have made the code simpler. At a fundamental level, a principle like DRY is not something that current LLM training techniques are able to learn, and while they can imitate it from their training sets to some degree when asked for large amounts of code, when prompted for many smaller chunks, they're asymptotically likely to violate it.
I think this is an important critique in part because it cuts against the argument that "LLMs are the modern compilers; if you reject them you're just like the people who wanted to keep hand-writing assembly code, and you'll be just as obsolete." Compilers actually represented a great win for abstraction, encapsulation, and DRY in general, and they supported and are integral to open source development, whereas LLMs are set to do the opposite.
[1] To see what this looks like in action in prose, see the example on page 30 of the NYTimes copyright complaint against OpenAI. #AI #GenAI #LLMs #VibeCoding

@arXiv_statAP_bot@mastoxiv.page
2025-07-31 08:48:51

A comparison of variable selection methods and predictive models for postoperative bowel surgery complications
\"Ozge \c{S}ahin, Annemiek Kwast, Annemieke Witteveen, Tina Nane
arxiv.org/abs/2507.22771

@muz4now@mastodon.world
2025-07-31 21:51:01

In Memory Of Tom #PianoImprov #NowPlaying #nowplaying #originalmusic