Manipulation of photonic topological edge and corner states via trivial claddings
Hai-Xiao Wang, Li Liang, Shuai Shao, Shiwei Tang, Junhui Hu, Yin Poo, Jian-Hua Jiang
https://arxiv.org/abs/2511.18705 https://arxiv.org/pdf/2511.18705 https://arxiv.org/html/2511.18705
arXiv:2511.18705v1 Announce Type: new
Abstract: Crystalline symmetry offers a powerful tool to realize photonic topological phases, in which additional trivial claddings are typically required to confine topological boundary states. However, the utility of the trivial cladding in manipulating topological waves is often overlooked. Here, we demonstrate two topologically distinct kagome photonic crystals (KPCs) based on different crystalline symmetries: $C_6$-symmetric KPCs exhibit a quantum spin Hall phase, while $C_3$-symmetric KPCs serve as trivial cladding. By tuning the geometric parameter of the trivial cladding, we observe that a pair of topological interface states featuring pseudospin-momentum locking undergoes a phase transition, accompanied by the appearance and disappearance of corner states in a finite hexagonal supercell. Such a geometry-induced band inversion is characterized by a sign change in the Dirac mass of the topological interface states and holds potential for applications such as rainbow trapping. Furthermore, we experimentally demonstrate that the corner states, a hallmark of higher-order topology, also depend critically on the trivial cladding. Our work highlights the crucial role of trivial claddings in the formation of topological boundary states, and offers a novel approach for their manipulation.
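For intuition (my sketch, not from the paper): the interface states described here can be modeled by a 1D Dirac Hamiltonian $H(k) = v k \sigma_x + m \sigma_z$, with the cladding geometry controlling the mass $m$. The band inversion the abstract mentions corresponds to $m$ passing through zero, closing and reopening the gap $2|m|$.

```python
# Minimal sketch (an assumed toy model, not the paper's calculation):
# interface states as a 1D Dirac Hamiltonian H(k) = v*k*sigma_x + m*sigma_z.
# A geometry-induced band inversion corresponds to the mass m passing
# through zero: the gap 2|m| closes and reopens with opposite sign.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def dirac_bands(k, v=1.0, m=0.5):
    """Eigenvalues of H(k) = v k sigma_x + m sigma_z (ascending order)."""
    H = v * k * sigma_x + m * sigma_z
    return np.linalg.eigvalsh(H)

ks = np.linspace(-1, 1, 201)
for m in (-0.5, 0.0, 0.5):   # stand-in for sweeping the cladding parameter
    bands = np.array([dirac_bands(k, m=m) for k in ks])
    gap = bands[:, 1].min() - bands[:, 0].max()
    print(f"m={m:+.1f}: numerical gap={gap:.3f}, analytic 2|m|={2*abs(m):.3f}")
```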
I've expanded a bit on my reflections about #inaturalist's push for using LLMs to regurgitate user-generated insights - and why replacing human peer-to-peer learning is likely harmful to people's motivations.
Read it here: #citizenscience
I learned on Saturday that Bari Weiss spiked our story, INSIDE CECOT, which was supposed to air tonight. We (Ori and I) asked for a call to discuss her decision. She did not afford us that courtesy/opportunity. Our story was screened five times and cleared by both CBS attorneys and Standards and Practices. It is factually correct. In my view, pulling it now, after every rigorous internal check has been met, is not an editorial decis…
Sources: Weiss thinks that the existing 60 Minutes framework did not provide sufficient checks and balances to ensure that the reporting met Weiss' standards (Sara Fischer/Axios)
https://www.axios.com/2025/12/22/60-minutes-bari-weiss-cecot
Scattering in Time-Varying Drude-Lorentz Models
Bryce Dixon, Calvin M. Hooper, Ian R. Hooper, Simon A. R. Horsley
https://arxiv.org/abs/2511.19322 https://arxiv.org/pdf/2511.19322 https://arxiv.org/html/2511.19322
arXiv:2511.19322v1 Announce Type: new
Abstract: Motivated by recent experiments, the theoretical study of wave propagation in time-varying materials is of current interest. Although significant in nearly all such experiments, material dispersion is commonly neglected in theoretical studies. Yet, as we show here, understanding the precise microscopic model of the material dispersion is crucial for predicting experimental outcomes. We study the temporal scattering coefficients of four different time-varying Drude-Lorentz models, exploring how an incident continuous wave splits into forward and backward waves due to an abrupt change in plasma frequency. The differences in the predicted scattering are unique to time-varying media, and arise from the exact way in which the time variation enters the various model parameters. We verify our results using a custom finite-difference time-domain (FDTD) algorithm, concluding with a discussion of the limitations that arise from using these models with an abrupt change in plasma frequency.
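A minimal sketch of the kind of setup involved (assumed details, not the authors' algorithm): 1D FDTD coupled to a Drude current obeying dJ/dt = ε0 ωp(t)² E − γ J, with ωp stepping abruptly at t_switch. Where the time variation enters the model (here, multiplying ωp² onto E in the current equation) is exactly the modeling choice the paper shows matters.

```python
# Sketch only: 1D FDTD with a Drude current dJ/dt = eps0*wp(t)^2*E - gamma*J,
# where the plasma frequency wp steps abruptly at t_switch. Boundaries are
# simply reflecting; this is illustrative, not production FDTD.
import numpy as np

c, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nx, dx = 2000, 1e-8                    # 20 um domain, 10 nm cells
dt = 0.5 * dx / c                      # CFL-stable time step
gamma = 1e12                           # collision rate [1/s]
wp1, wp2 = 1e15, 2e15                  # plasma frequency before/after switch
w0 = 3e15                              # CW source frequency (above both wp)
t_switch = 1500 * dt

Ez = np.zeros(nx); Hy = np.zeros(nx); Jz = np.zeros(nx)

for n in range(3000):
    t = n * dt
    wp = wp1 if t < t_switch else wp2  # abrupt change in plasma frequency
    Hy[:-1] += dt / (mu0 * dx) * (Ez[1:] - Ez[:-1])
    Jz += dt * (eps0 * wp**2 * Ez - gamma * Jz)   # Drude current update
    Ez[1:] += dt / eps0 * ((Hy[1:] - Hy[:-1]) / dx - Jz[1:])
    Ez[nx // 4] += np.sin(w0 * t)      # soft CW source
# After t_switch the CW inside the medium scatters into forward- and
# backward-propagating components (the temporal scattering the paper studies).
```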
Exploiting ID-Text Complementarity via Ensembling for Sequential Recommendation
Liam Collins, Bhuvesh Kumar, Clark Mingxuan Ju, Tong Zhao, Donald Loveland, Leonardo Neves, Neil Shah
https://arxiv.org/abs/2512.17820 https://arxiv.org/pdf/2512.17820 https://arxiv.org/html/2512.17820
arXiv:2512.17820v1 Announce Type: new
Abstract: Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advances in language and vision modeling. Several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. Underlying both lines of work, however, is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models learn complementary signals, meaning that either should provide performance gains when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
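A hedged sketch of the kind of simple ensembling the abstract describes (the normalization and weighting are my illustrative assumptions, not the paper's exact recipe): train the ID-based and text-based models independently, then combine their per-item scores with a convex weight tuned on validation data.

```python
# Illustrative sketch, not the paper's exact method: independently trained
# ID-based and text-based recommenders are combined via a convex ensemble of
# their per-item scores. The score arrays stand in for each model's output
# over the shared candidate-item set.
import numpy as np

def ensemble_scores(id_scores: np.ndarray,
                    text_scores: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Convex combination of two models' item scores.

    Scores are z-normalized first so neither model dominates purely
    because of its score scale (an assumption, not from the paper).
    """
    def znorm(s):
        return (s - s.mean()) / (s.std() + 1e-8)
    return alpha * znorm(id_scores) + (1 - alpha) * znorm(text_scores)

def top_k(id_scores, text_scores, k=10, alpha=0.5):
    combined = ensemble_scores(id_scores, text_scores, alpha)
    return np.argsort(-combined)[:k]   # indices of the top-k recommendations

# Toy usage: 1000 candidate items; alpha would be tuned on a validation split.
rng = np.random.default_rng(0)
print(top_k(rng.normal(size=1000), rng.normal(size=1000), k=5, alpha=0.6))
```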
Just finished "Beasts Made of Night" by Tochi Onyebuchi...
Indirect CW for fantasy police state violence.
So I very much enjoyed Onyebuchi's "Riot Baby," and when I grabbed this at the library, I was certain it would be excellent. But having finished it, I'm not sure I like it that much overall?
The first maybe third is excellent, including the world-building, which is fascinating. I feel like Onyebuchi must have played "Shadow of the Colossus" at some point. Onyebuchi certainly does know how to make me care for his characters.
Some spoilers from here on out...
.
.
.
I felt like it stumbles towards the middle, with Bo's reactions neither making sense in the immediate context, nor in retrospect by the end when we've learned more. Things are a bit floaty in the middle, with an unclear picture of what exactly is going on politics-wise and what the motivations are. Here I think there were some nuances that didn't make it to the page, or perhaps I'm just a bit thick and not getting stuff I should be? More is of course revealed by the end, but I still wasn't satisfied with the explanations of things. For example, (spoilers) I don't feel I understand clearly what kind of power the army of aki was supposed to represent within the city. Perhaps it was necessary to wield the threat of offensive inisisa use? In that case, a single scene somewhere of Izu's faction deploying that tactic would have been helpful, I think.
Then towards the end, for me things really started to jumble, with unclear motivations, revelations that didn't feel well-paced or -structured, and a finale where both the action & collapsing concerns felt stilted and disjointed. I was particularly bothered by the mechanics/ethics of the most important death that set the finale in motion, and by the unexplained mechanism through which it led to what came next. I can read a couple of possible interesting morals into the whole denouement, but didn't feel that any of them were sufficiently explored. Especially if we're supposed to see some personal failing in the protagonist's actions, I don't think it's made clear enough what that is, since I feel his reasons to reject each faction are pretty solid; and if we're meant to either pity or abjure his indecision, I don't think the message lands clearly enough.
There *is* a sequel, which honestly I wasn't sure of after the last page, and which I'm now very interested in. Beasts is Onyebuchi's debut, which maybe explains why I felt Riot Baby didn't have the same plotting issues. It also maybe means that, when setting up the ending, Onyebuchi couldn't be sure a sequel would make it to publication.
Overall I really enjoyed at least 80% of this, but was expecting even better (especially politically) given Onyebuchi's other work, and I didn't feel like I found it.
#AmReading
Analyzing and Internalizing Complex Policy Documents for LLM Agents
Jiateng Liu, Zhenhailong Wang, Xiaojiang Huang, Yingjie Li, Xing Fan, Xiang Li, Chenlei Guo, Ruhi Sarikaya, Heng Ji
https://arxiv.org/abs/2510.11588
I found this reflection of one community member in the forum interesting: «Every time I read that AI is better, faster etc. it lowers my motivation to identify. It makes me feel that what I do when identifying (mostly unknowns) is not going to be useful any longer.»
My take is that this development is par for the course once organizations "professionalize" — and thus start to focus more on sustaining their staff salaries than on their mission & working with their volunteer community…
#inaturalist
Easy Adaptation: An Efficient Task-Specific Knowledge Injection Method for Large Models in Resource-Constrained Environments
Dong Chen, Zhengqing Hu, Shixing Zhao, Yibo Guo
https://arxiv.org/abs/2512.17771 https://arxiv.org/pdf/2512.17771 https://arxiv.org/html/2512.17771
arXiv:2512.17771v1 Announce Type: new
Abstract: While their enormous parameter scale endows Large Models (LMs) with unparalleled performance, it also limits their adaptability to specific tasks. Parameter-Efficient Fine-Tuning (PEFT) has emerged as a critical approach for effectively adapting LMs to a diverse range of downstream tasks. However, existing PEFT methods face two primary challenges: (1) High resource cost. Although PEFT methods significantly reduce resource demands compared to full fine-tuning, they still require substantial time and memory, making them impractical in resource-constrained environments. (2) Parameter dependency. PEFT methods rely heavily on updating a subset of parameters associated with LMs to incorporate task-specific knowledge. Yet, due to increasing competition in the LM landscape, many companies have adopted closed-source policies for their leading models, offering access only via Application Programming Interfaces (APIs). Moreover, the expense is often prohibitive and difficult to sustain, as the fine-tuning process for LMs is extremely slow. Although small models perform far worse than LMs in general, they can achieve superior results on particular distributions while requiring only minimal resources. Motivated by this insight, we propose Easy Adaptation (EA), which designs Specific Small Models (SSMs) to complement the underfitted data distribution of LMs. Extensive experiments show that EA matches the performance of PEFT on diverse tasks without accessing LM parameters, and requires only minimal resources.
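A minimal sketch of the idea as I read it (the routing rule and all names below are my assumptions, not the paper's EA algorithm): a small specialist model handles inputs from the distribution the LM underfits, and everything else falls through to the closed-source LM via its API, so no LM parameters are ever touched.

```python
# Illustrative sketch of the Easy Adaptation idea, not the paper's method:
# a Specific Small Model (SSM) covers the task distribution the large model
# underfits; other inputs fall through to the LM's API. `call_lm_api` and
# `in_specialist_domain` are hypothetical placeholders.
from typing import Callable

def make_easy_adapter(ssm: Callable[[str], str],
                      call_lm_api: Callable[[str], str],
                      in_specialist_domain: Callable[[str], bool]):
    """Route each input either to the small specialist or the LM API.

    No LM parameters are updated: task-specific knowledge lives entirely
    in the cheap-to-train SSM, matching the abstract's constraint.
    """
    def predict(x: str) -> str:
        if in_specialist_domain(x):
            return ssm(x)          # cheap, specialized path
        return call_lm_api(x)      # general path via closed-source API
    return predict

# Toy usage with stand-in callables:
predict = make_easy_adapter(
    ssm=lambda x: "SSM answer for: " + x,
    call_lm_api=lambda x: "LM answer for: " + x,
    in_specialist_domain=lambda x: "domain-specific" in x,
)
print(predict("a domain-specific query"))
print(predict("a general query"))
```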