Tootfinder

Opt-in global Mastodon full text search. Join the index!

No exact results. Similar results found.
@migueldeicaza@mastodon.social
2024-04-30 02:32:42

middleeasteye.net/news/exclusi

Recently, Mr. Lawrence said, customers have been snapping up used Teslas for a little over $20,000, after applying a $4,000 federal tax credit. “We’re seeing younger people,” Mr. Lawrence said. “We are seeing more blue-collar and entry-level white-collar people. The purchase price of the car has suddenly become in reach.” Regarded by conservative politicians and other critics as playthings of the liberal elite, electric vehicles are fast becoming more accessible.…

@nuthatch@infosec.exchange
2024-06-04 05:04:37

“Electric Cars Are Suddenly Becoming Affordable.
More efficient manufacturing, falling battery costs and intense competition are lowering sticker prices for battery-powered models to within striking distance of gasoline cars.
Competition is also intensifying. Toyota and other Japanese carmakers with a reputation for delivering reliable and affordable vehicles are belatedly offering electric vehicles. Honda plans to begin producing them at an Ohio factory next year.”
nytimes.com/2024/06/03/busines 🎁

@arXiv_condmatmtrlsci_bot@mastoxiv.page
2024-04-04 07:14:15

Evolution of Berry Phase and Half-Metallicity in Cr$_2$Te$_3$ in Response to Strain, Filling, Thickness, and Surface Termination
Sohee Kwon, Yuhang Liu, Hang Chi, Gen Yin, Mahesh R. Neupane, Roger K. Lake
arxiv.org/abs/2404.02315

@arXiv_statME_bot@mastoxiv.page
2024-04-03 07:10:52

Supporting Bayesian modelling workflows with iterative filtering for multiverse analysis
Anna Elisabeth Riha, Nikolas Siccha, Antti Oulasvirta, Aki Vehtari
arxiv.org/abs/2404.01688

@arXiv_mathAG_bot@mastoxiv.page
2024-06-03 08:39:32

This arxiv.org/abs/2312.13755 has been replaced.
Initial toot: mastoxiv.page/@arXiv_mat…

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:49:12

Better & Faster Large Language Models via Multi-token Prediction
Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, Gabriel Synnaeve
arxiv.org/abs/2404.19737 arxiv.org/pdf/2404.19737
arXiv:2404.19737v1 Announce Type: new
Abstract: Large language models such as GPT and Llama are trained with a next-token prediction loss. In this work, we suggest that training language models to predict multiple future tokens at once results in higher sample efficiency. More specifically, at each position in the training corpus, we ask the model to predict the following n tokens using n independent output heads, operating on top of a shared model trunk. Considering multi-token prediction as an auxiliary training task, we measure improved downstream capabilities with no overhead in training time for both code and natural language models. The method is increasingly useful for larger model sizes, and keeps its appeal when training for multiple epochs. Gains are especially pronounced on generative benchmarks like coding, where our models consistently outperform strong baselines by several percentage points. Our 13B parameter model solves 12% more problems on HumanEval and 17% more on MBPP than comparable next-token models. Experiments on small algorithmic tasks demonstrate that multi-token prediction is favorable for the development of induction heads and algorithmic reasoning capabilities. As an additional benefit, models trained with 4-token prediction are up to 3 times faster at inference, even with large batch sizes.
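
A minimal sketch of the multi-token prediction setup the abstract describes: n independent output heads on a shared trunk, where head k at position t is trained to predict the token k+1 steps ahead. The trunk (a small LSTM stand-in for the transformer body), the layer sizes, and the toy training step are illustrative assumptions, not the paper's implementation.

# Sketch only: multi-token prediction with n independent heads on a shared trunk.
# Architecture and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared trunk; an LSTM stands in for the transformer body here.
        self.trunk = nn.LSTM(d_model, d_model, batch_first=True)
        # One independent output head per future-token offset.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, tokens):
        h, _ = self.trunk(self.embed(tokens))     # (B, T, d_model)
        return [head(h) for head in self.heads]   # n tensors of shape (B, T, V)

def multi_token_loss(model, tokens):
    # Average cross-entropy over the n heads; head k at position t
    # predicts token t + k + 1, so the usable length shrinks with k.
    losses = []
    for k, logits in enumerate(model(tokens)):
        offset = k + 1
        pred = logits[:, :-offset, :]             # positions that have a target
        target = tokens[:, offset:]               # the (t + k + 1)-th tokens
        losses.append(F.cross_entropy(
            pred.reshape(-1, pred.size(-1)), target.reshape(-1)))
    return torch.stack(losses).mean()

# Toy usage: one gradient step on random token data.
model = MultiTokenPredictor()
batch = torch.randint(0, 1000, (2, 32))
loss = multi_token_loss(model, batch)
loss.backward()

At inference time, the extra heads can either be dropped (recovering ordinary next-token decoding) or, per the abstract, used to propose several tokens per forward pass for faster generation.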

@arXiv_csCV_bot@mastoxiv.page
2024-06-04 07:02:46

Automatic Fused Multimodal Deep Learning for Plant Identification
Alfreds Lapkovskis, Natalia Nefedova, Ali Beikmohammadi
arxiv.org/abs/2406.01455

@arXiv_csRO_bot@mastoxiv.page
2024-06-04 07:22:25

Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners
Zhi Zheng, Qian Feng, Hang Li, Alois Knoll, Jianxiang Feng
arxiv.org/abs/2406.00430
