Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@mgorny@social.treehouse.systems
2026-03-27 05:51:55
Content warning: Which programming language powered James Bond's gadgets?

QBASIC.
#DaddyJoke

@inthehands@hachyderm.io
2026-02-26 19:08:44

Still, there are some other things HyperCard did that we’d do well to study, even with full-scale tools. Off the top of my head:
- It richly rewarded unguided exploration. Unsuccessful experimentation had a way of leading to paths forward, not just dead ends.
- Much of it worked by direct manipulation: if you want the thing there, you put the thing there. (Unity and Godot both sort of kind of do some descendant of this, but not with the same discoverability and transparency.)
- There was a rich library of good starting points, modifiable examples.
- An empty but functioning new project had essentially zero boilerplate. You didn’t have to have 15 files and hundreds of lines of code to get a blank page.
- Its UI made it easy-ish for newcomers to ask “What can I do with this thing here?” Modern autocomplete and inline docs kind of sort of approximate this, but in practice only for people who already have tool expertise.
- HyperTalk (the programming language) is tricky to write (it’s a p-lang), but it’s remarkably easy to read. You can peer at it with very limited knowledge and make educated guesses about its semantics, and those guesses will be mostly correct. (HyperTalk syntax tends to get the most attention when people talk about this, I think at the expense of the other things above.)

@hw@fediscience.org
2026-04-13 08:57:33

There are roughly two ways I've acquired skills in programming languages in the past: the "hard" way for writing code (e.g., "Learn Python the Hard Way"), and the "easy" way for learning to read a new programming language by skimming the language specs or leafing through a book on the topic (e.g., "The Supercollider Book").
I suppose there's a third way now for me: reading up on software architecture design (e.g., stuff like "500 lines or less") to improve co-creation skills with large language models?
For example, Yoav Rubin's article on "An Archaeology-Inspired Database" in 500 lines or less really made me think about Clojure in a new way.
Thoughts on this?
#AIResearch #Software #programming

@fanf@mendeddrum.org
2026-02-16 15:42:03

from my link log —
Towards fearless macros.
lambdaland.org/posts/2023-10-1
saved 2026-02-15

@kexpmusicbot@mastodonapp.uk
2026-04-13 07:11:27

🇺🇦 #NowPlaying on KEXP's #MidnightInAPerfectWorld
Patience:
🎵 The Pressure
#Patience
diveindex.bandcamp.com/track/r
open.spotify.com/track/1o9fhhF

@Mediagazer@mstdn.social
2026-03-17 02:10:41

South Korean public broadcaster KBS partners with Sinclair to offer Korean-language programming via Sinclair's NextGen TV stations across the US (Matthew Keys/TheDesk.net)
thedesk.net/2026/03/kbs-sincla

@Cognessence@social.linux.pizza
2026-02-13 09:46:57

‘Only Embrace’ was actually called ‘Only Envelope’ for the longest time, partly because along with the emotional layer I was interested in breaking free of any use of percussion - rather implying rhythm through envelopes programmed into the patches (along with musical use of shifting compression flaring in response to these, and then saturation that would “bloom” out in various M/S configurations.)

@tomkalei@machteburch.social
2026-04-20 10:57:28

A quine is a computer program that prints its own source code. Such a program exists in any sufficiently powerful programming language by Rogers's fixed-point theorem.
A quine cannot contain all of its source code as a string which it then prints. One needs a trick.
One trick is to decompose the program into three parts: P, a preamble; S.quote, the tail string S of the program quoted in the programming language; and finally the literal tail string S itself.
The program is P S.quote S
1/2
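The P S.quote S construction in the post above can be sketched as a two-line Python quine (a minimal illustrative example, not from the post): P is the text `data = `, S.quote is the quoted string literal on the first line, and S is the literal tail, i.e. the final `print` line. The quine cannot carry comments, since any comment would have to appear in its own output, so the mapping is explained here instead.

```python
data = 'print("data = " + repr(data) + "\\n" + data)'
print("data = " + repr(data) + "\n" + data)
```

Running it prints exactly its own two lines: `"data = " + repr(data)` rebuilds P followed by S.quote (with `repr` re-quoting the tail), and appending `data` itself reproduces the literal tail S.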

@michabbb@social.vivaldi.net
2026-04-04 13:55:44

✨ Progressive content reveals with x-slidewire::fragment – steps through each fragment before advancing to the next slide
Auto-slide timers with config, deck & slide-level precedence for automated presentations
💻 Syntax highlighting bundled via #Phiki – no extra setup needed
Supports a language attribute, plus optional theme, font & font size overrides

@arXiv_csCL_bot@mastoxiv.page
2026-03-31 11:12:48

Replaced article(s) found for cs.CL. arxiv.org/list/cs.CL/new
[2/5]:
- POTSA: A Cross-Lingual Speech Alignment Framework for Speech-to-Text Translation
Li, Cui, Wang, Ge, Huang, Li, Peng, Lu, Tashi, Wang, Dang
arxiv.org/abs/2511.09232 mastoxiv.page/@arXiv_csCL_bot/
- Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Yunzhe Xu, Zhuosheng Zhang, Zhe Liu
arxiv.org/abs/2511.10465 mastoxiv.page/@arXiv_csCL_bot/
- $\pi$-Attention: Periodic Sparse Transformers for Efficient Long-Context Modeling
Dong Liu, Yanxuan Yu
arxiv.org/abs/2511.10696 mastoxiv.page/@arXiv_csCL_bot/
- Based on Data Balancing and Model Improvement for Multi-Label Sentiment Classification Performanc...
Zijin Su, Huanzhu Lyu, Yuren Niu, Yiming Liu
arxiv.org/abs/2511.14073 mastoxiv.page/@arXiv_csCL_bot/
- HEAD-QA v2: Expanding a Healthcare Benchmark for Reasoning
Alexis Correa-Guillén, Carlos Gómez-Rodríguez, David Vilares
arxiv.org/abs/2511.15355 mastoxiv.page/@arXiv_csCL_bot/
- Towards Hyper-Efficient RAG Systems in VecDBs: Distributed Parallel Multi-Resolution Vector Search
Dong Liu, Yanxuan Yu
arxiv.org/abs/2511.16681 mastoxiv.page/@arXiv_csCL_bot/
- Estonian WinoGrande Dataset: Comparative Analysis of LLM Performance on Human and Machine Transla...
Marii Ojastu, Hele-Andra Kuulmets, Aleksei Dorkin, Marika Borovikova, Dage Särg, Kairit Sirts
arxiv.org/abs/2511.17290 mastoxiv.page/@arXiv_csCL_bot/
- A Systematic Study of In-the-Wild Model Merging for Large Language Models
Oğuz Kağan Hitit, Leander Girrbach, Zeynep Akata
arxiv.org/abs/2511.21437 mastoxiv.page/@arXiv_csCL_bot/
- CREST: Universal Safety Guardrails Through Cluster-Guided Cross-Lingual Transfer
Lavish Bansal, Naman Mishra
arxiv.org/abs/2512.02711 mastoxiv.page/@arXiv_csCL_bot/
- Multilingual Medical Reasoning for Question Answering with Large Language Models
Pietro Ferrazzi, Aitor Soroa, Rodrigo Agerri
arxiv.org/abs/2512.05658 mastoxiv.page/@arXiv_csCL_bot/
- OnCoCo 1.0: A Public Dataset for Fine-Grained Message Classification in Online Counseling Convers...
Albrecht, Lehmann, Poltermann, Rudolph, Steigerwald, Stieler
arxiv.org/abs/2512.09804 mastoxiv.page/@arXiv_csCL_bot/
- Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, an...
Hanyu Cai, Binqi Shen, Lier Jin, Lan Hu, Xiaojing Fan
arxiv.org/abs/2512.12812 mastoxiv.page/@arXiv_csCL_bot/
- Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
Ovalle, Ross, Ruder, Williams, Ullrich, Ibrahim, Sagun
arxiv.org/abs/2512.22712 mastoxiv.page/@arXiv_csCL_bot/
- Activation Steering for Masked Diffusion Language Models
Adi Shnaidman, Erin Feiglin, Osher Yaari, Efrat Mentel, Amit Levi, Raz Lapid
arxiv.org/abs/2512.24143 mastoxiv.page/@arXiv_csCL_bot/
- JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese L...
Liu, Li, Niu, Zhang, Xun, Hou, Wang, Iwasawa, Matsuo, Hatakeyama-Sato
arxiv.org/abs/2601.01627 mastoxiv.page/@arXiv_csCL_bot/
- FACTUM: Mechanistic Detection of Citation Hallucination in Long-Form RAG
Dassen, Kotula, Murray, Yates, Lawrie, Kayi, Mayfield, Duh
arxiv.org/abs/2601.05866 mastoxiv.page/@arXiv_csCL_bot/
- †DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems
Zabir Al Nazi, Shubhashis Roy Dipta, Sudipta Kar
arxiv.org/abs/2601.06853 mastoxiv.page/@arXiv_csCL_bot/
- Symphonym: Universal Phonetic Embeddings for Cross-Script Name Matching
Stephen Gadd
arxiv.org/abs/2601.06932 mastoxiv.page/@arXiv_csCL_bot/
- LLMs versus the Halting Problem: Revisiting Program Termination Prediction
Sultan, Armengol-Estape, Kesseli, Vanegue, Shahaf, Adi, O'Hearn
arxiv.org/abs/2601.18987 mastoxiv.page/@arXiv_csCL_bot/
- MuVaC: A Variational Causal Framework for Multimodal Sarcasm Understanding in Dialogues
Diandian Guo, Fangfang Yuan, Cong Cao, Xixun Lin, Chuan Zhou, Hao Peng, Yanan Cao, Yanbing Liu
arxiv.org/abs/2601.20451 mastoxiv.page/@arXiv_csCL_bot/