QBASIC.
#DaddyJoke
Still, there are some other things HyperCard did that we’d do well to study, even with full-scale tools. Off the top of my head:
- It richly rewarded unguided exploration. Unsuccessful experimentation had a way of leading to paths forward, not just dead ends.
- Much of it worked by direct manipulation: if you want the thing there, you put the thing there. (Unity and Godot both sort of kind of do some descendant of this, but not with the same discoverability and transparency.)
- There was a rich library of good starting points and modifiable examples.
- An empty but functioning new project had essentially zero boilerplate. You didn’t have to have 15 files and hundreds of lines of code to get a blank page.
- Its UI made it easy-ish for newcomers to ask “What can I do with this thing here?” Modern autocomplete and inline docs kind of sort of approximate this, but in practice only for people who already have tool expertise.
- HyperTalk (the programming language) is tricky to write (it’s a p-lang), but it’s remarkably easy to read. You can peer at it with very limited knowledge and make educated guesses about its semantics, and those guesses will be mostly correct; even a newcomer can guess what a line like "go to the next card" or "put the date into field 1" will do. (HyperTalk syntax tends to get the most attention when people talk about this, I think at the expense of the other things above.)
There are roughly two ways I've acquired programming-language skills in the past: the "hard" way, by writing code (e.g., "Learn Python the Hard Way"), and the "easy" way, learning to read a new language by skimming the language spec or leafing through a book on the topic (e.g., "The SuperCollider Book").
I suppose there's now a third way for me: reading up on software architecture and design (e.g., stuff like "500 Lines or Less"), so as to improve co-creation skills with large language models?
For example, Yoav Rubin's chapter "An Archaeology-Inspired Database" in 500 Lines or Less really made me think about Clojure in a new way.
Thoughts on this?
#AIResearch #Software #programming
from my link log —
Towards fearless macros.
https://lambdaland.org/posts/2023-10-17_fearless_macros/
saved 2026-02-15
🇺🇦 #NowPlaying on KEXP's #MidnightInAPerfectWorld
Patience:
🎵 The Pressure
#Patience
https://diveindex.bandcamp.com/track/rewind-your-patience
https://open.spotify.com/track/1o9fhhFl9nYKi6qgx24gy6
South Korean public broadcaster KBS partners with Sinclair to offer Korean-language programming via Sinclair's NextGen TV stations across the US (Matthew Keys/TheDesk.net)
https://thedesk.net/2026/03/kbs-sinclair-tv-channel-pact-korean/
‘Only Embrace’ was actually called ‘Only Envelope’ for the longest time, partly because, along with the emotional layer, I was interested in breaking free of any use of percussion, instead implying rhythm through envelopes programmed into the patches (along with musical use of shifting compression flaring in response to these, and then saturation that would “bloom” out in various M/S configurations).
A quine is a computer program that prints its own source code. Such a program exists in any sufficiently powerful programming language by Rogers' fixed-point theorem.
A quine cannot simply contain all of its source code as a string literal which it then prints: that string would have to contain itself, including its own quotes, and so on without end. One needs a trick.
One trick is to decompose the program into three parts: P, a preamble; S.quote, a tail string S of the program quoted in the programming language; and then the literal tail string S itself.
The program is P S.quote S.
1/2
✨ Progressive content reveals with x-slidewire::fragment – steps through each fragment before advancing to the next slide
Auto-slide timers with config-, deck- & slide-level precedence for automated presentations
💻 Syntax highlighting bundled via #Phiki – no extra setup needed
Supports language attribute, optional theme, font & font size overrides
Replaced article(s) found for cs.CL. https://arxiv.org/list/cs.CL/new
[2/5]:
- POTSA: A Cross-Lingual Speech Alignment Framework for Speech-to-Text Translation
Li, Cui, Wang, Ge, Huang, Li, Peng, Lu, Tashi, Wang, Dang
https://arxiv.org/abs/2511.09232 https://mastoxiv.page/@arXiv_csCL_bot/115541846907664054
- Beyond Elicitation: Provision-based Prompt Optimization for Knowledge-Intensive Tasks
Yunzhe Xu, Zhuosheng Zhang, Zhe Liu
https://arxiv.org/abs/2511.10465 https://mastoxiv.page/@arXiv_csCL_bot/115547607561282911
- π-Attention: Periodic Sparse Transformers for Efficient Long-Context Modeling
Dong Liu, Yanxuan Yu
https://arxiv.org/abs/2511.10696 https://mastoxiv.page/@arXiv_csCL_bot/115564418836654965
- Based on Data Balancing and Model Improvement for Multi-Label Sentiment Classification Performanc...
Zijin Su, Huanzhu Lyu, Yuren Niu, Yiming Liu
https://arxiv.org/abs/2511.14073 https://mastoxiv.page/@arXiv_csCL_bot/115575715073023141
- HEAD-QA v2: Expanding a Healthcare Benchmark for Reasoning
Alexis Correa-Guillén, Carlos Gómez-Rodríguez, David Vilares
https://arxiv.org/abs/2511.15355 https://mastoxiv.page/@arXiv_csCL_bot/115581410328165116
- Towards Hyper-Efficient RAG Systems in VecDBs: Distributed Parallel Multi-Resolution Vector Search
Dong Liu, Yanxuan Yu
https://arxiv.org/abs/2511.16681 https://mastoxiv.page/@arXiv_csCL_bot/115603508442305146
- Estonian WinoGrande Dataset: Comparative Analysis of LLM Performance on Human and Machine Transla...
Marii Ojastu, Hele-Andra Kuulmets, Aleksei Dorkin, Marika Borovikova, Dage Särg, Kairit Sirts
https://arxiv.org/abs/2511.17290 https://mastoxiv.page/@arXiv_csCL_bot/115604083224487885
- A Systematic Study of In-the-Wild Model Merging for Large Language Models
Oğuz Kağan Hitit, Leander Girrbach, Zeynep Akata
https://arxiv.org/abs/2511.21437 https://mastoxiv.page/@arXiv_csCL_bot/115621178703846052
- CREST: Universal Safety Guardrails Through Cluster-Guided Cross-Lingual Transfer
Lavish Bansal, Naman Mishra
https://arxiv.org/abs/2512.02711 https://mastoxiv.page/@arXiv_csCL_bot/115655090475535157
- Multilingual Medical Reasoning for Question Answering with Large Language Models
Pietro Ferrazzi, Aitor Soroa, Rodrigo Agerri
https://arxiv.org/abs/2512.05658 https://mastoxiv.page/@arXiv_csCL_bot/115683267711014189
- OnCoCo 1.0: A Public Dataset for Fine-Grained Message Classification in Online Counseling Convers...
Albrecht, Lehmann, Poltermann, Rudolph, Steigerwald, Stieler
https://arxiv.org/abs/2512.09804 https://mastoxiv.page/@arXiv_csCL_bot/115700409397020978
- Does Tone Change the Answer? Evaluating Prompt Politeness Effects on Modern LLMs: GPT, Gemini, an...
Hanyu Cai, Binqi Shen, Lier Jin, Lan Hu, Xiaojing Fan
https://arxiv.org/abs/2512.12812 https://mastoxiv.page/@arXiv_csCL_bot/115729149622659403
- Beg to Differ: Understanding Reasoning-Answer Misalignment Across Languages
Ovalle, Ross, Ruder, Williams, Ullrich, Ibrahim, Sagun
https://arxiv.org/abs/2512.22712 https://mastoxiv.page/@arXiv_csCL_bot/115808161882146194
- Activation Steering for Masked Diffusion Language Models
Adi Shnaidman, Erin Feiglin, Osher Yaari, Efrat Mentel, Amit Levi, Raz Lapid
https://arxiv.org/abs/2512.24143 https://mastoxiv.page/@arXiv_csCL_bot/115819533211103315
- JMedEthicBench: A Multi-Turn Conversational Benchmark for Evaluating Medical Safety in Japanese L...
Liu, Li, Niu, Zhang, Xun, Hou, Wang, Iwasawa, Matsuo, Hatakeyama-Sato
https://arxiv.org/abs/2601.01627 https://mastoxiv.page/@arXiv_csCL_bot/115847901607405421
- FACTUM: Mechanistic Detection of Citation Hallucination in Long-Form RAG
Dassen, Kotula, Murray, Yates, Lawrie, Kayi, Mayfield, Duh
https://arxiv.org/abs/2601.05866 https://mastoxiv.page/@arXiv_csCL_bot/115881545684182376
- †DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems
Zabir Al Nazi, Shubhashis Roy Dipta, Sudipta Kar
https://arxiv.org/abs/2601.06853 https://mastoxiv.page/@arXiv_csCL_bot/115887753245730019
- Symphonym: Universal Phonetic Embeddings for Cross-Script Name Matching
Stephen Gadd
https://arxiv.org/abs/2601.06932 https://mastoxiv.page/@arXiv_csCL_bot/115887767008671765
- LLMs versus the Halting Problem: Revisiting Program Termination Prediction
Sultan, Armengol-Estape, Kesseli, Vanegue, Shahaf, Adi, O'Hearn
https://arxiv.org/abs/2601.18987 https://mastoxiv.page/@arXiv_csCL_bot/115972010510378715
- MuVaC: A Variational Causal Framework for Multimodal Sarcasm Understanding in Dialogues
Diandian Guo, Fangfang Yuan, Cong Cao, Xixun Lin, Chuan Zhou, Hao Peng, Yanan Cao, Yanbing Liu
https://arxiv.org/abs/2601.20451 https://mastoxiv.page/@arXiv_csCL_bot/115977891530875024