2025-12-14 15:54:56
Cursor launches Visual Editor, a vibe-coding product for designers that integrates professional design controls with natural language editing (Maxwell Zeff/Wired)
https://www.wired.com/story/cursor-launches-pro-design-tools-figma/
When writing a parser for a new (programming) language, you can find yourself doing a lot of lookahead and making design compromises to avoid it. I wonder what a language would end up like if you parsed it backwards instead of from the start, as if you just reversed the code as a string. Would the language end up more humane? I guess this is already a thing, but I don't know the search term.
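A rough sketch of the kind of lookahead being described (the arrow-function vs. parenthesized-expression case below is my own illustration, not from the post): read forward, the parser has to scan to the matching ")" and peek one more token before it can decide; read right to left, the "=>" (or its absence) turns up first and the decision is immediate.

```python
# Toy illustration of a lookahead-heavy decision and its reversed counterpart.
# This ignores nested parentheses inside function bodies; it is only a sketch.

def classify_forward(tokens):
    """Unbounded lookahead: skip to the matching ')' and peek the next token."""
    assert tokens[0] == "("
    depth, i = 0, 0
    while True:
        if tokens[i] == "(":
            depth += 1
        elif tokens[i] == ")":
            depth -= 1
            if depth == 0:
                break
        i += 1
    after = tokens[i + 1] if i + 1 < len(tokens) else None
    return "arrow-function" if after == "=>" else "parenthesized-expr"

def classify_reversed(tokens):
    """Scanning right to left, the decision is made at the first relevant token."""
    for tok in reversed(tokens):
        if tok == "=>":
            return "arrow-function"
        if tok == "(":            # reached the opening paren without seeing "=>"
            break
    return "parenthesized-expr"

arrow = ["(", "a", ",", "b", ")", "=>", "a", "+", "b"]
parens = ["(", "a", ",", "b", ")"]
print(classify_forward(arrow), classify_forward(parens))    # arrow-function parenthesized-expr
print(classify_reversed(arrow), classify_reversed(parens))  # arrow-function parenthesized-expr
```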
from my link log —
Swift regrets: a programming language design retrospective.
https://belkadan.com/blog/tags/swift-regrets/
saved 2025-11-26 https://
If I didn't need iOS 26 for testing, I'd have waited a few point releases. Fixing years of tech debt is good, but it has to frustrate those who already updated and might have preferred to wait.
“Apple to focus on ‘quality and underlying performance’ with iOS 27 next year: report”
https://
Proc3D: Procedural 3D Generation and Parametric Editing of 3D Shapes with Large Language Models
Fadlullah Raji, Stefano Petrangeli, Matheus Gadelha, Yu Shen, Uttaran Bhattacharya, Gang Wu
https://arxiv.org/abs/2601.12234 https://arxiv.org/pdf/2601.12234 https://arxiv.org/html/2601.12234
arXiv:2601.12234v1 Announce Type: new
Abstract: Generating 3D models has traditionally been a complex task requiring specialized expertise. While recent advances in generative AI have sought to automate this process, existing methods produce non-editable representations, such as meshes or point clouds, limiting their adaptability for iterative design. In this paper, we introduce Proc3D, a system designed to generate editable 3D models while enabling real-time modifications. At its core, Proc3D introduces the procedural compact graph (PCG), a graph representation of 3D models that encodes the algorithmic rules and structures necessary for generating the model. This representation exposes key parameters, allowing intuitive manual adjustments via sliders and checkboxes, as well as real-time, automated modifications through natural language prompts using Large Language Models (LLMs). We demonstrate Proc3D's capabilities using two generative approaches: GPT-4o with in-context learning (ICL) and a fine-tuned LLAMA-3 model. Experimental results show that Proc3D outperforms existing methods in editing efficiency, achieving a more than 400x speedup over conventional approaches that require full regeneration for each modification. Additionally, Proc3D improves ULIP scores by 28%; ULIP is a metric that evaluates the alignment between generated 3D models and text prompts. By enabling text-aligned 3D model generation along with precise, real-time parametric edits, Proc3D facilitates highly accurate text-based image editing applications.
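A minimal sketch of the idea (the node types, parameter names, and API below are assumptions for illustration, not the paper's actual PCG format): store the model as procedural rules with exposed parameters, so an edit only changes a parameter and re-runs the cheap procedure instead of regenerating the whole asset.

```python
from dataclasses import dataclass, field

@dataclass
class ProcNode:
    op: str                                        # e.g. "cylinder", "array", "union"
    params: dict = field(default_factory=dict)     # exposed as sliders/checkboxes
    children: list = field(default_factory=list)

def evaluate(node, depth=0):
    """Walk the graph and emit a crude trace of the geometry it would build."""
    args = ", ".join(f"{k}={v}" for k, v in node.params.items())
    lines = ["  " * depth + f"{node.op}({args})"]
    for child in node.children:
        lines.extend(evaluate(child, depth + 1))
    return lines

# A table: one top, four legs placed by a parametric "array" node.
table = ProcNode("union", children=[
    ProcNode("box", {"w": 1.2, "d": 0.8, "h": 0.05}),                           # tabletop
    ProcNode("array", {"count": 4}, [ProcNode("cylinder", {"r": 0.03, "h": 0.7})]),  # legs
])
print("\n".join(evaluate(table)))

# An edit like "make the legs thicker" only changes a parameter and re-evaluates,
# which is why edits can be orders of magnitude faster than full regeneration.
table.children[1].children[0].params["r"] = 0.06
print("\n".join(evaluate(table)))
```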
toXiv_bot_toot
Replaced article(s) found for physics.optics. https://arxiv.org/list/physics.optics/new
[1/1]:
- LLM4Laser: Large Language Models Automate the Design of Lasers
Renjie Li, Ceyao Zhang, Sixuan Mao, Xiyuan Zhou, Feng Yin, Sergios Theodoridis, Zhaoyu Zhang
https://arxiv.org/abs/2104.12145
- Room-temperature valley-selective emission in Si-MoSe2 heterostructures enabled by high-quality-f...
Feng Pan, et al.
https://arxiv.org/abs/2409.09806 https://mastoxiv.page/@arXiv_physicsoptics_bot/113152185040115763
- 1T'-MoTe$_2$ as an integrated saturable absorber for photonic machine learning
Maria Carolina Volpato, Henrique G. Rosa, Tom Reep, Pierre-Louis de Assis, Newton Cesario Frateschi
https://arxiv.org/abs/2507.16140 https://mastoxiv.page/@arXiv_physicsoptics_bot/114901571498004090
- NeOTF: Guidestar-free neural representation for broadband dynamic imaging through scattering
Yunong Sun, Fei Xia
https://arxiv.org/abs/2507.22328 https://mastoxiv.page/@arXiv_physicsoptics_bot/114947052118796753
- Structured Random Models for Phase Retrieval with Optical Diffusers
Zhiyuan Hu, Fakhriyya Mammadova, Julián Tachella, Michael Unser, Jonathan Dong
https://arxiv.org/abs/2510.14490 https://mastoxiv.page/@arXiv_physicsoptics_bot/115388901264416806
- Memory Effects in Time-Modulated Radiative Heat Transfer
Riccardo Messina, Philippe Ben-Abdallah
https://arxiv.org/abs/2510.19378 https://mastoxiv.page/@arXiv_physicsoptics_bot/115422659227231796
- Mie-tronics supermodes and symmetry breaking in nonlocal metasurfaces
Thanh Xuan Hoang, Ayan Nussupbekov, Jie Ji, Daniel Leykam, Jaime Gomez Rivas, Yuri Kivshar
https://arxiv.org/abs/2511.03560 https://mastoxiv.page/@arXiv_physicsoptics_bot/115502066008543828
- Integrated soliton microcombs beyond the turnkey limit
Wang, Xu, Wang, Zhu, Luo, Luo, Wang, Ni, Yang, Gong, Xiao, Li, Yang
https://arxiv.org/abs/2511.06909 https://mastoxiv.page/@arXiv_physicsoptics_bot/115530791701071777
- Ising accelerator with a reconfigurable interferometric photonic processor
Rausell-Campo, Al Kayed, Pérez-López, Aadhi, Shastri, Francoy
https://arxiv.org/abs/2511.13284 https://mastoxiv.page/@arXiv_physicsoptics_bot/115570439939074488
- Superradiance in dense atomic samples
I. M. de Araújo, H. Sanchez, L. F. Alves da Silva, M. H. Y. Moussa
https://arxiv.org/abs/2504.20242 https://mastoxiv.page/@arXiv_quantph_bot/114425762810828336
- Fluctuation-induced Hall-like lateral forces in a chiral-gain environment
Daigo Oue, Mário G. Silveirinha
https://arxiv.org/abs/2507.14754 https://mastoxiv.page/@arXiv_condmatmeshall_bot/114896308178114535
- Tensor-network approach to quantum optical state evolution beyond the Fock basis
Nikolay Kapridov, Egor Tiunov, Dmitry Chermoshentsev
https://arxiv.org/abs/2511.15295 https://mastoxiv.page/@arXiv_quantph_bot/115581390666689204
- OmniLens: Blind Lens Aberration Correction via Large LensLib Pre-Training and Latent PSF Repres...
Jiang, Qian, Gao, Sun, Yang, Yi, Li, Yang, Van Gool, Wang
https://arxiv.org/abs/2511.17126 https://mastoxiv.page/@arXiv_eessIV_bot/115603729319581340
toXiv_bot_toot
Learning to Build Shapes by Extrusion
Thor Vestergaard Christiansen, Karran Pandey, Alba Reinders, Karan Singh, Morten Rieger Hannemose, J. Andreas Bærentzen
https://arxiv.org/abs/2601.22858 https://arxiv.org/pdf/2601.22858 https://arxiv.org/html/2601.22858
arXiv:2601.22858v1 Announce Type: new
Abstract: We introduce Text Encoded Extrusion (TEE), a text-based representation that expresses mesh construction as sequences of face extrusions rather than polygon lists, and a method for generating 3D meshes from TEE using a large language model (LLM). By learning extrusion sequences that assemble a mesh, similar to the way artists create meshes, our approach naturally supports arbitrary output face counts and produces manifold meshes by design, in contrast to recent transformer-based models. The learnt extrusion sequences can also be applied to existing meshes, enabling editing in addition to generation. To train our model, we decompose a library of quadrilateral meshes with non-self-intersecting face loops into constituent loops, which can be viewed as their building blocks, and finetune an LLM on the steps for reassembling the meshes by performing a sequence of extrusions. We demonstrate that our representation enables reconstruction, novel shape synthesis, and the addition of new features to existing meshes.
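A toy sketch of the idea (the command syntax and helpers below are made up for illustration; this is not the paper's actual TEE format): encode construction as replayable extrusion steps over a base quad, so the same text both generates a mesh and edits it by appending more steps.

```python
# Build a quad mesh by replaying a text sequence of face extrusions
# instead of emitting a polygon list directly.

def extrude(verts, faces, face_idx, dz):
    """Raise faces[face_idx] by dz along z, adding one side quad per edge."""
    face = faces[face_idx]
    top = []
    for vi in face:
        x, y, z = verts[vi]
        verts.append((x, y, z + dz))
        top.append(len(verts) - 1)
    n = len(face)
    for i in range(n):  # side walls between the old loop and the raised loop
        a, b = face[i], face[(i + 1) % n]
        faces.append((a, b, top[(i + 1) % n], top[i]))
    faces[face_idx] = tuple(top)  # the raised copy replaces the original face

def run(program):
    """Interpret a tiny TEE-like script starting from a unit quad in the xy-plane."""
    verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
    faces = [(0, 1, 2, 3)]
    for line in program.strip().splitlines():
        op, *args = line.split("#")[0].split()
        if op == "extrude":
            extrude(verts, faces, int(args[0]), float(args[1]))
    return verts, faces

verts, faces = run("""
extrude 0 0.5   # pull the base quad up into a box
extrude 0 0.5   # pull the new top up again: a taller block, no polygon list edited by hand
""")
print(len(verts), "vertices,", len(faces), "faces")   # 12 vertices, 9 faces
```

Appending further `extrude` lines to the script edits the existing shape, which is the generation-and-editing duality the abstract describes.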
toXiv_bot_toot