Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@arXiv_physicsatomph_bot@mastoxiv.page
2025-12-09 09:21:08

Determination of nuclear quadrupole moments of $^{25}$Mg, $^{87}$Sr, and $^{135,137}$Ba via configuration-interaction plus coupled-cluster approach
Yong-Bo Tang
arxiv.org/abs/2512.07603 arxiv.org/pdf/2512.07603 arxiv.org/html/2512.07603
arXiv:2512.07603v1 Announce Type: new
Abstract: Using the configuration-interaction plus coupled-cluster approach, we calculate the electric-field gradients $q$ for the low-lying states of alkaline-earth atoms, including magnesium (Mg), strontium (Sr), and barium (Ba). These low-lying states specifically include the $3s3p~^3\!P_{1,2}$ states of Mg; the $5s4d~^1\!D_{2}$ and $5s5p~^3\!P_{1,2}$ states of Sr; as well as the $6s5d~^3\!D_{1,2,3}$, $6s5d~^1\!D_{2}$, and $6s6p~^1\!P_{1}$ states of Ba. By combining these with the measured electric-quadrupole hyperfine-structure constants of these states, we accurately determine the nuclear quadrupole moments of $^{25}$Mg, $^{87}$Sr, and $^{135,137}$Ba. These results are compared with the available data. The comparison shows that our nuclear quadrupole moment of $^{25}$Mg is in perfect agreement with the result from the mesonic X-ray experiment. However, there are approximately 10\% and 4\% differences between our results and the currently adopted values [Pyykkö, Mol. Phys. 116, 1328 (2018)] for the nuclear quadrupole moments of $^{87}$Sr and $^{135,137}$Ba, respectively. We also calculate the magnetic dipole hyperfine-structure constants of these states, and the calculated results exhibit good agreement with the measured data.
toXiv_bot_toot
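The determination step described in the abstract amounts to inverting the first-order hyperfine relation $B = eQq$: with the calculated electric-field gradient $q$ (in atomic units) and a measured quadrupole coupling constant $B$ (in MHz), $Q$ follows from the standard conversion $B[\mathrm{MHz}] = 234.9647 \cdot Q[\mathrm{b}] \cdot q[\mathrm{a.u.}]$. A minimal sketch; the numerical inputs below are hypothetical placeholders, not the paper's values:

```python
# Extract a nuclear quadrupole moment Q (in barn) from a measured
# electric-quadrupole hyperfine constant B (MHz) and a calculated
# electric-field gradient q (atomic units), via B = 234.9647 * Q * q.
# The example values below are hypothetical placeholders.

CONV = 234.9647  # MHz per (barn * atomic unit of EFG)

def quadrupole_moment(B_mhz: float, q_au: float) -> float:
    """Q in barn from B (MHz) and EFG q (a.u.)."""
    return B_mhz / (CONV * q_au)

# Hypothetical example: B = -40.0 MHz measured, q = 0.85 a.u. calculated.
Q = quadrupole_moment(-40.0, 0.85)
print(f"Q = {Q:.4f} b")  # -> Q = -0.2003 b
```

The accuracy of $Q$ obtained this way is limited by the accuracy of the computed EFG, which is why the paper's CI+CC treatment of electron correlation matters.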

@stf@chaos.social
2025-11-26 17:06:26

by accident i stumbled on this review by the #NSA of Bruce Schneier's "Applied Crypto" book from long ago.

9. BOOK REVIEW: APPLIED CRYPTOGRAPHY [censored] Reviewer

Applied Cryptography, for those who don't read the internet news, is a
book written by Bruce Schneier last year. According to the jacket,
Schneier is a data security expert with a master's degree in computer
science. According to his followers, he is a hero who has finally
brought together the loose threads of cryptography for the general
public to understand. Schneier has gathered academic research, internet
gossip, and everything he co…
[Issue 1, TALES OF THE KRYPT, page 14 of 16; Doc ID: 6823780]

Playing loose with the facts is a serious problem with Schneier. For
example in discussing a small-exponent attack on RSA, he says "an
attack by Michael Wiener will recover e when e is up to one quarter the
size of n." Actually, Wiener's attack recovers the secret exponent d
when d has less than one quarter as many bits as n, which is a quite
different statement. Or: "The quadratic sieve is the fastest known
algorithm for factoring numb…
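The reviewer's corrected claim is easy to check computationally: Wiener's continued-fraction attack recovers the secret exponent d from the public pair (e, n) whenever d is small (roughly d < n^(1/4)/3). A self-contained sketch, not taken from the book or the review, run on a deliberately tiny toy modulus:

```python
# Wiener's continued-fraction attack on RSA with a small secret exponent d.
# Expand e/n as a continued fraction; one of its convergents k/d exposes d
# whenever d is below roughly n**0.25 / 3 (Wiener's bound).
from math import isqrt

def convergents(a, b):
    """Yield the convergents h/k of the continued fraction of a/b."""
    h0, h1, k0, k1 = 0, 1, 1, 0
    while b:
        q, (a, b) = a // b, (b, a % b)
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener_attack(e, n):
    """Return (d, p, q) if the attack succeeds, else None."""
    for k, d in convergents(e, n):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # If phi is correct, p and q solve x^2 - (n - phi + 1)x + n = 0.
        s = n - phi + 1
        disc = s * s - 4 * n
        if disc < 0:
            continue
        r = isqrt(disc)
        if r * r != disc or (s + r) % 2:
            continue
        p, q = (s + r) // 2, (s - r) // 2
        if p * q == n:
            return d, p, q
    return None

# Toy example: n = 1009 * 1013 = 1022117, d = 5, e = d^{-1} mod phi(n).
print(wiener_attack(816077, 1022117))  # -> (5, 1013, 1009)
```

Note that the attack yields the private exponent d and the factorization of n, which is exactly why the distinction the reviewer draws (small d, not small e) matters in practice.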
@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 11:11:43

High-precision luminescence cryothermometry strategy by using hyperfine structure
Marina N. Popova, Mosab Diab, Boris Z. Malkin
arxiv.org/abs/2511.19088 arxiv.org/pdf/2511.19088 arxiv.org/html/2511.19088
arXiv:2511.19088v1 Announce Type: new
Abstract: A novel, to the best of our knowledge, ultralow-temperature luminescence thermometry strategy is proposed, based on a measurement of relative intensities of hyperfine components in the spectra of Ho$^{3+}$ ions doped into a crystal. A $^{7}$LiYF$_4$:Ho$^{3+}$ crystal is chosen as an example. First, we show that temperatures in the range 10-35 K can be measured using the Boltzmann behavior of the populations of crystal-field levels separated by an energy interval of 23 cm$^{-1}$. Then we select the 6089 cm$^{-1}$ line of the holmium $^5I_5 \rightarrow {}^5I_7$ transition, which has a well-resolved hyperfine structure and falls within the transparency window of optical fibers (telecommunication S band), to demonstrate the possibility of measuring temperatures below 3 K. The temperature $T$ is determined by a least-squares fit to the measured intensities of all eight hyperfine components using the dependence $I(\nu) = I_1 \exp(-b\nu)$, where $I_1$ and $b = a + \frac{1}{kT}$ are fitting parameters and $a$ accounts for intensity variations due to mixing of wave functions of different crystal-field levels by the hyperfine interaction. In this method, the absolute and relative thermal sensitivities grow as $\frac{1}{T^2}$ and $\frac{1}{T}$, respectively, as $T$ approaches zero. We theoretically considered the intensity distributions within hyperfine manifolds and compared the results with experimental data. Application of the method to experimentally measured relative intensities of hyperfine components of the 6089 cm$^{-1}$ PL line yielded $T = 3.7 \pm 0.2$ K. For a temperature of 1 K, an order of magnitude better accuracy is expected.
toXiv_bot_toot
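The fitting step can be sketched numerically. Assuming a Boltzmann-like decay constant b = a + 1/(kT), with the component offsets ν in cm⁻¹ and k ≈ 0.695 cm⁻¹/K, a log-linear least-squares fit of the component intensities recovers T. The component positions, the value of a, and the amplitude below are synthetic placeholders, not the paper's data:

```python
# Least-squares extraction of T from hyperfine-component intensities
# I(nu) = I1 * exp(-b * nu), assuming b = a + 1/(k*T) with nu in cm^-1.
# All numerical inputs here are synthetic placeholders.
import numpy as np

K_CM = 0.6950  # Boltzmann constant in cm^-1 per kelvin

def fit_temperature(nu, intensities, a):
    """Fit ln I = ln I1 - b*nu, then return T = 1 / (k * (b - a))."""
    slope, _ = np.polyfit(nu, np.log(intensities), 1)
    b = -slope
    return 1.0 / (K_CM * (b - a))

# Synthetic data: 8 hyperfine components at T = 3.7 K, a = 0.01 cm.
nu = np.linspace(0.0, 0.7, 8)   # component offsets in cm^-1 (hypothetical)
T_true, a = 3.7, 0.01
I = 2.0 * np.exp(-(a + 1.0 / (K_CM * T_true)) * nu)
print(round(fit_temperature(nu, I, a), 3))  # -> 3.7
```

The 1/(kT) term in b explains the quoted sensitivity scaling: the thermal part of the exponent grows as T falls, so the fit becomes more, not less, constrained at low temperature.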

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 10:34:50

Regularized Random Fourier Features and Finite Element Reconstruction for Operator Learning in Sobolev Space
Xinyue Yu, Hayden Schaeffer
arxiv.org/abs/2512.17884 arxiv.org/pdf/2512.17884 arxiv.org/html/2512.17884
arXiv:2512.17884v1 Announce Type: new
Abstract: Operator learning is a data-driven approximation of mappings between infinite-dimensional function spaces, such as the solution operators of partial differential equations. Kernel-based operator learning can offer accurate, theoretically justified approximations that require less training than standard methods. However, they can become computationally prohibitive for large training sets and can be sensitive to noise. We propose a regularized random Fourier feature (RRFF) approach, coupled with a finite element reconstruction map (RRFF-FEM), for learning operators from noisy data. The method uses random features drawn from multivariate Student's $t$ distributions, together with frequency-weighted Tikhonov regularization that suppresses high-frequency noise. We establish high-probability bounds on the extreme singular values of the associated random feature matrix and show that when the number of features $N$ scales like $m \log m$ with the number of training samples $m$, the system is well-conditioned, which yields estimation and generalization guarantees. Detailed numerical experiments on benchmark PDE problems, including advection, Burgers', Darcy flow, Helmholtz, Navier-Stokes, and structural mechanics, demonstrate that RRFF and RRFF-FEM are robust to noise and achieve improved performance with reduced training time compared to the unregularized random feature model, while maintaining competitive accuracy relative to kernel and neural operator methods.
toXiv_bot_toot
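The core recipe can be sketched in a few lines: frequencies drawn from a heavy-tailed Student's t distribution, cosine features, and a Tikhonov penalty that grows with frequency so high-frequency components are damped. This is an illustrative 1-D regression reduction of the idea, not the paper's operator-learning code, and every parameter value is a hypothetical choice:

```python
# Regularized random Fourier features: fit y = f(x) using features
# cos(w_j * x + b_j) with w_j ~ Student's t, plus a frequency-weighted
# Tikhonov penalty lam * (1 + w_j^2) on coefficient j.
# A 1-D sketch of the idea; all parameter choices are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def rrff_fit(x, y, n_features=300, df=4.0, scale=4.0, lam=1e-6):
    w = scale * rng.standard_t(df, size=n_features)  # heavy-tailed freqs
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    A = np.cos(np.outer(x, w) + b)                   # (m, N) feature matrix
    # Solve the ridge problem via an augmented least-squares system:
    # minimize ||A c - y||^2 + sum_j lam * (1 + w_j^2) * c_j^2.
    penalty = np.sqrt(lam * (1.0 + w ** 2))
    A_aug = np.vstack([A, np.diag(penalty)])
    y_aug = np.concatenate([y, np.zeros(n_features)])
    c = np.linalg.lstsq(A_aug, y_aug, rcond=None)[0]
    return lambda xq: np.cos(np.outer(xq, w) + b) @ c

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)       # smooth target, easily representable
model = rrff_fit(x, y)
err = np.max(np.abs(model(x) - y))
print(err < 0.05)
```

The frequency-dependent weight is the point: plain ridge shrinks all coefficients equally, whereas weighting by (1 + w²) preferentially suppresses the high-frequency features that would otherwise fit the noise.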

@idbrii@mastodon.gamedev.place
2025-12-23 04:42:05

Concept artists give informed critiques of using GenAI to concept. Fav quote:
“When I show someone a rough sketch they see it differently than I do. They’re not seeing the sketch, they’re seeing the potential for what the sketch could be through their own taste and experiences, and it sparks all kinds of wonderful ideas that I would’ve never thought of on my own.”
Concept Artists Say Generative AI References Only Make Their Jobs Harder

@arXiv_physicsoptics_bot@mastoxiv.page
2025-11-25 10:53:53

MOCLIP: A Foundation Model for Large-Scale Nanophotonic Inverse Design
S. Rodionov, A. Burguete-Lopez, M. Makarenko, Q. Wang, F. Getman, A. Fratalocchi
arxiv.org/abs/2511.18980 arxiv.org/pdf/2511.18980 arxiv.org/html/2511.18980
arXiv:2511.18980v1 Announce Type: new
Abstract: Foundation models (FM) are transforming artificial intelligence by enabling generalizable, data-efficient solutions across different domains for a broad range of applications. However, the lack of large and diverse datasets limits the development of FM in nanophotonics. This work presents MOCLIP (Metasurface Optics Contrastive Learning Pretrained), a nanophotonic foundation model that integrates metasurface geometry and spectra within a shared latent space. MOCLIP employs contrastive learning to align geometry and spectral representations using an experimentally acquired dataset with a sample density comparable to ImageNet-1K. The study demonstrates MOCLIP's inverse-design capabilities for high-throughput zero-shot prediction at a rate of 0.2 million samples per second, enabling the design of a full 4-inch wafer populated with high-density metasurfaces in minutes. It also shows generative latent-space optimization reaching 97 percent accuracy. Finally, we introduce an optical information storage concept that uses MOCLIP to achieve a density of 0.1 Gbit per square millimeter at the resolution limit, exceeding commercial optical media by a factor of six. These results position MOCLIP as a scalable and versatile platform for next-generation photonic design and data-driven applications.
toXiv_bot_toot
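The contrastive-pretraining step follows the CLIP recipe: encode each modality, normalize, and train so that matching geometry/spectrum pairs score higher than mismatched ones in the shared latent space. A toy numpy sketch of the symmetric InfoNCE objective on pre-computed embeddings; the shapes, temperature, and random embeddings are hypothetical stand-ins for the two encoder outputs:

```python
# Symmetric InfoNCE loss over a batch of paired embeddings, as in
# CLIP-style contrastive pretraining. The embeddings are random
# placeholders standing in for geometry/spectrum encoder outputs.
import numpy as np

def clip_loss(geo, spec, temperature=0.07):
    """Mean of the two cross-entropies over the similarity matrix."""
    geo = geo / np.linalg.norm(geo, axis=1, keepdims=True)
    spec = spec / np.linalg.norm(spec, axis=1, keepdims=True)
    logits = geo @ spec.T / temperature        # (batch, batch) similarities
    labels = np.arange(len(geo))               # i-th geometry <-> i-th spectrum
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)   # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()    # true pairs sit on the diagonal
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(1)
geo = rng.normal(size=(8, 32))
aligned = clip_loss(geo, geo)        # perfectly paired embeddings
shuffled = clip_loss(geo, geo[::-1]) # deliberately mispaired
print(aligned < shuffled)  # -> True
```

Minimizing this loss pulls each geometry embedding toward its own spectrum and away from the rest of the batch, which is what makes zero-shot retrieval across the two modalities possible afterwards.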

@arXiv_csLG_bot@mastoxiv.page
2025-12-22 11:50:19

Crosslisted article(s) found for cs.LG. arxiv.org/list/cs.LG/new
[1/3]:
- Optimizing Text Search: A Novel Pattern Matching Algorithm Based on Ukkonen's Approach
Xinyu Guan, Shaohua Zhang
arxiv.org/abs/2512.16927 mastoxiv.page/@arXiv_csDS_bot/
- SpIDER: Spatially Informed Dense Embedding Retrieval for Software Issue Localization
Shravan Chaudhari, Rahul Thomas Jacob, Mononito Goswami, Jiajun Cao, Shihab Rashid, Christian Bock
arxiv.org/abs/2512.16956 mastoxiv.page/@arXiv_csSE_bot/
- MemoryGraft: Persistent Compromise of LLM Agents via Poisoned Experience Retrieval
Saksham Sahai Srivastava, Haoyu He
arxiv.org/abs/2512.16962 mastoxiv.page/@arXiv_csCR_bot/
- Colormap-Enhanced Vision Transformers for MRI-Based Multiclass (4-Class) Alzheimer's Disease Clas...
Faisal Ahmed
arxiv.org/abs/2512.16964 mastoxiv.page/@arXiv_eessIV_bo
- Probing Scientific General Intelligence of LLMs with Scientist-Aligned Workflows
Wanghan Xu, et al.
arxiv.org/abs/2512.16969 mastoxiv.page/@arXiv_csAI_bot/
- PAACE: A Plan-Aware Automated Agent Context Engineering Framework
Kamer Ali Yuksel
arxiv.org/abs/2512.16970 mastoxiv.page/@arXiv_csAI_bot/
- A Women's Health Benchmark for Large Language Models
Elisabeth Gruber, et al.
arxiv.org/abs/2512.17028 mastoxiv.page/@arXiv_csCL_bot/
- Perturb Your Data: Paraphrase-Guided Training Data Watermarking
Pranav Shetty, Mirazul Haque, Petr Babkin, Zhiqiang Ma, Xiaomo Liu, Manuela Veloso
arxiv.org/abs/2512.17075 mastoxiv.page/@arXiv_csCL_bot/
- Disentangled representations via score-based variational autoencoders
Benjamin S. H. Lyo, Eero P. Simoncelli, Cristina Savin
arxiv.org/abs/2512.17127 mastoxiv.page/@arXiv_statML_bo
- Biosecurity-Aware AI: Agentic Risk Auditing of Soft Prompt Attacks on ESM-Based Variant Predictors
Huixin Zhan
arxiv.org/abs/2512.17146 mastoxiv.page/@arXiv_csCR_bot/
- Application of machine learning to predict food processing level using Open Food Facts
Arora, Chauhan, Rana, Aditya, Bhagat, Kumar, Kumar, Semar, Singh, Bagler
arxiv.org/abs/2512.17169 mastoxiv.page/@arXiv_qbioBM_bo
- Systemic Risk Radar: A Multi-Layer Graph Framework for Early Market Crash Warning
Sandeep Neela
arxiv.org/abs/2512.17185 mastoxiv.page/@arXiv_qfinRM_bo
- Do Foundational Audio Encoders Understand Music Structure?
Keisuke Toyama, Zhi Zhong, Akira Takahashi, Shusuke Takahashi, Yuki Mitsufuji
arxiv.org/abs/2512.17209 mastoxiv.page/@arXiv_csSD_bot/
- CheXPO-v2: Preference Optimization for Chest X-ray VLMs with Knowledge Graph Consistency
Xiao Liang, Yuxuan An, Di Wang, Jiawei Hu, Zhicheng Jiao, Bin Jing, Quan Wang
arxiv.org/abs/2512.17213 mastoxiv.page/@arXiv_csCV_bot/
- Machine Learning Assisted Parameter Tuning on Wavelet Transform Amorphous Radial Distribution Fun...
Deriyan Senjaya, Stephen Ekaputra Limantoro
arxiv.org/abs/2512.17245 mastoxiv.page/@arXiv_condmatmt
- AlignDP: Hybrid Differential Privacy with Rarity-Aware Protection for LLMs
Madhava Gaikwad
arxiv.org/abs/2512.17251 mastoxiv.page/@arXiv_csCR_bot/
- Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Baolei Zhang, Minghong Fang, Zhuqing Liu, Biao Yi, Peizhao Zhou, Yuan Wang, Tong Li, Zheli Liu
arxiv.org/abs/2512.17254 mastoxiv.page/@arXiv_csCR_bot/
- Verifiability-First Agents: Provable Observability and Lightweight Audit Agents for Controlling A...
Abhivansh Gupta
arxiv.org/abs/2512.17259 mastoxiv.page/@arXiv_csMA_bot/
- Warmer for Less: A Cost-Efficient Strategy for Cold-Start Recommendations at Pinterest
Saeed Ebrahimi, Weijie Jiang, Jaewon Yang, Olafur Gudmundsson, Yucheng Tu, Huizhong Duan
arxiv.org/abs/2512.17277 mastoxiv.page/@arXiv_csIR_bot/
- LibriVAD: A Scalable Open Dataset with Deep Learning Benchmarks for Voice Activity Detection
Ioannis Stylianou, Achintya kr. Sarkar, Nauman Dawalatabad, James Glass, Zheng-Hua Tan
arxiv.org/abs/2512.17281 mastoxiv.page/@arXiv_csSD_bot/
- Penalized Fair Regression for Multiple Groups in Chronic Kidney Disease
Carter H. Nakamoto, Lucia Lushi Chen, Agata Foryciarz, Sherri Rose
arxiv.org/abs/2512.17340 mastoxiv.page/@arXiv_statME_bo
toXiv_bot_toot