Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csCL_bot@mastoxiv.page
2024-03-07 06:51:01

A Measure for Transparent Comparison of Linguistic Diversity in Multilingual NLP Data Sets
Tanja Samardzic, Ximena Gutierrez, Christian Bentz, Steven Moran, Olga Pelloni
arxiv.org/abs/2403.03909

@arXiv_csSE_bot@mastoxiv.page
2024-05-06 08:30:18

This arxiv.org/abs/2401.01508 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-06 08:26:47

This arxiv.org/abs/2403.11894 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-07 06:50:55

Impoverished Language Technology: The Lack of (Social) Class in NLP
Amanda Cercas Curry, Zeerak Talat, Dirk Hovy
arxiv.org/abs/2403.03874

@arXiv_csSD_bot@mastoxiv.page
2024-05-06 06:52:41

Unveiling the Potential of LLM-Based ASR on Chinese Open-Source Datasets
Xuelong Geng, Tianyi Xu, Kun Wei, Bingsheng Mu, Hongfei Xue, He Wang, Yangze Li, Pengcheng Guo, Yuhang Dai, Longhao Li, Mingchen Shao, Lei Xie
arxiv.org/abs/2405.02132

@arXiv_csCL_bot@mastoxiv.page
2024-05-06 08:26:26

This arxiv.org/abs/2310.05597 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-03-07 08:25:31

This arxiv.org/abs/2402.18061 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@felwert@mstdn.social
2024-02-22 11:04:46

Today someone contacted me about TCFlib, my Python package for working with the #WebLicht #NLP service. I developed it mainly for internal use a couple of years back, and it never got too much traction in the WebLicht community, but I’m still happy to hear if it proved useful for somebody.

@arXiv_csLG_bot@mastoxiv.page
2024-05-02 07:18:11

Navigating WebAI: Training Agents to Complete Web Tasks with Large Language Models and Reinforcement Learning
Lucas-Andreï Thil, Mirela Popa, Gerasimos Spanakis
arxiv.org/abs/2405.00516

@tschfflr@fediscience.org
2024-04-18 14:07:31

Cool program of the next workshop for #Ukrainian #NLP ! #nlproc

@arXiv_csAI_bot@mastoxiv.page
2024-03-27 06:46:41

ALISA: Accelerating Large Language Model Inference via Sparsity-Aware KV Caching
Youpeng Zhao, Di Wu, Jun Wang
arxiv.org/abs/2403.17312

@arXiv_csIR_bot@mastoxiv.page
2024-05-03 06:50:19

"In-Context Learning" or: How I learned to stop worrying and love "Applied Information Retrieval"
Andrew Parry, Debasis Ganguly, Manish Chandra
arxiv.org/abs/2405.01116

@arXiv_csDL_bot@mastoxiv.page
2024-02-20 07:32:39

Citation Amnesia: NLP and Other Academic Fields Are in a Citation Age Recession
Jan Philip Wahle, Terry Ruas, Mohamed Abdalla, Bela Gipp, Saif M. Mohammad
arxiv.org/abs/2402.12046

@arXiv_csCL_bot@mastoxiv.page
2024-03-07 08:25:11

This arxiv.org/abs/2401.11389 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCE_bot@mastoxiv.page
2024-02-27 06:47:17

ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
Liuzhenghao Lv, Zongying Lin, Hao Li, Yuyang Liu, Jiaxi Cui, Calvin Yu-Chian Chen, Li Yuan, Yonghong Tian
arxiv.org/abs/2402.16445 arxiv.org/pdf/2402.16445
arXiv:2402.16445v1 Announce Type: new
Abstract: Large Language Models (LLMs), including GPT-x and LLaMA2, have achieved remarkable performance in multiple Natural Language Processing (NLP) tasks. Under the premise that protein sequences constitute the protein language, Protein Large Language Models (ProLLMs) trained on protein corpora excel at de novo protein sequence generation. However, as of now, unlike LLMs in NLP, no ProLLM is capable of multiple tasks in the Protein Language Processing (PLP) field. This prompts us to delineate the inherent limitations in current ProLLMs: (i) the lack of natural language capabilities, (ii) insufficient instruction understanding, and (iii) high training resource demands. To address these challenges, we introduce a training framework to transform any general LLM into a ProLLM capable of handling multiple PLP tasks. Specifically, our framework utilizes low-rank adaptation and employs a two-stage training approach, and it is distinguished by its universality, low overhead, and scalability. Through training under this framework, we propose the ProLLaMA model, the first known ProLLM to handle multiple PLP tasks simultaneously. Experiments show that ProLLaMA achieves state-of-the-art results in the unconditional protein sequence generation task. In the controllable protein sequence generation task, ProLLaMA can design novel proteins with desired functionalities. In the protein property prediction task, ProLLaMA achieves nearly 100% accuracy across many categories. The latter two tasks are beyond the reach of other ProLLMs. Code is available at github.com/Lyu6PosHao/ProLLaMA.

@arXiv_csHC_bot@mastoxiv.page
2024-05-01 07:17:17

Blind Spots and Biases: Exploring the Role of Annotator Cognitive Biases in NLP
Sanjana Gautam, Mukund Srinath
arxiv.org/abs/2404.19071 arxiv.org/pdf/2404.19071
arXiv:2404.19071v1 Announce Type: new
Abstract: With the rapid proliferation of artificial intelligence, there is growing concern over its potential to exacerbate existing biases and societal disparities and introduce novel ones. This issue has prompted widespread attention from academia, policymakers, industry, and civil society. While evidence suggests that integrating human perspectives can mitigate bias-related issues in AI systems, it also introduces challenges associated with cognitive biases inherent in human decision-making. Our research focuses on reviewing existing methodologies and ongoing investigations aimed at understanding annotation attributes that contribute to bias.

@arXiv_statME_bot@mastoxiv.page
2024-03-28 08:46:27

This arxiv.org/abs/2308.11138 has been replaced.
initial toot: mastoxiv.page/@arXiv_sta…

@arXiv_csCR_bot@mastoxiv.page
2024-03-18 08:30:47

This arxiv.org/abs/2401.01085 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCR_…

@arXiv_csSE_bot@mastoxiv.page
2024-02-28 06:53:01

Dealing with Data for RE: Mitigating Challenges using NLP and Generative AI
Smita Ghaisas, Anmol Singhal
arxiv.org/abs/2402.16977

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 08:44:34

This arxiv.org/abs/2404.18759 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@lysander07@sigmoid.social
2024-04-19 16:44:23

5th Workshop on Patent Text Mining and Semantic Technologies (PatentSemTech2024) is co-organized by my colleagues from @…, co-located with SIGIR 2024.
paper deadline: April 25, 2024
website:

[Image: screenshot of the PatentSemTech 2024 webpage, 5th Workshop on Patent Text Mining and Semantic Technologies]

PatentSemTech aims to establish a long-term collaboration and a two-way communication channel between the IP industry and academia from relevant fields such as natural-language processing (NLP), text and data mining (TDM) and semantic technologies (ST) in order to explore and transfer new knowledge, methods and technologies for the benefit of industrial applications as well as support…

@arXiv_csAR_bot@mastoxiv.page
2024-03-01 06:46:55

Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale
Dan Zhao, Siddharth Samsi, Joseph McDonald, Baolin Li, David Bestor, Michael Jones, Devesh Tiwari, Vijay Gadepally
arxiv.org/abs/2402.18593

@arXiv_csAI_bot@mastoxiv.page
2024-04-22 06:46:30

NLP-enabled trajectory map-matching in urban road networks using transformer sequence-to-sequence model
Sevin Mohammadi, Andrew W. Smyth
arxiv.org/abs/2404.12460

@arXiv_csCL_bot@mastoxiv.page
2024-04-04 08:33:02

This arxiv.org/abs/2403.19183 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csDC_bot@mastoxiv.page
2024-03-18 07:24:38

ATOM: Asynchronous Training of Massive Models for Deep Learning in a Decentralized Environment
Xiaofeng Wu, Jia Rao, Wei Chen
arxiv.org/abs/2403.10504

@arXiv_csIR_bot@mastoxiv.page
2024-04-03 06:50:13

Where to Move Next: Zero-shot Generalization of LLMs for Next POI Recommendation
Shanshan Feng, Haoming Lyu, Caishun Chen, Yew-Soon Ong
arxiv.org/abs/2404.01855

@arXiv_csRO_bot@mastoxiv.page
2024-04-26 07:14:36

Chat2Scenario: Scenario Extraction From Dataset Through Utilization of Large Language Model
Yongqi Zhao, Wenbo Xiao, Tomislav Mihalj, Jia Hu, Arno Eichberger
arxiv.org/abs/2404.16147

@arXiv_csIT_bot@mastoxiv.page
2024-02-27 08:21:40

This arxiv.org/abs/2308.06013 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIT_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:28:46

This arxiv.org/abs/2305.12829 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csSE_bot@mastoxiv.page
2024-04-19 08:32:57

This arxiv.org/abs/2304.10265 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csDL_bot@mastoxiv.page
2024-04-03 06:54:32

Sentiment Analysis of Citations in Scientific Articles Using ChatGPT: Identifying Potential Biases and Conflicts of Interest
Walid Hariri
arxiv.org/abs/2404.01800

@arXiv_csHC_bot@mastoxiv.page
2024-04-30 07:24:11

Bridging the Social & Technical Divide in Augmentative and Alternative Communication (AAC) Applications for Autistic Adults
Lara J. Martin, Malathy Nagalakshmi
arxiv.org/abs/2404.17730

@arXiv_csCL_bot@mastoxiv.page
2024-04-05 08:30:52

This arxiv.org/abs/2403.09057 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-05 08:30:50

This arxiv.org/abs/2403.04182 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-01 08:29:57

This arxiv.org/abs/2401.05632 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csIR_bot@mastoxiv.page
2024-02-28 06:50:29

Natural Language Processing Methods for Symbolic Music Generation and Information Retrieval: a Survey
Dinh-Viet-Toan Le, Louis Bigo, Mikaela Keller, Dorien Herremans
arxiv.org/abs/2402.17467

@arXiv_csCL_bot@mastoxiv.page
2024-03-01 06:53:30

Improving Legal Judgement Prediction in Romanian with Long Text Encoders
Mihai Masala, Traian Rebedea, Horia Velicu
arxiv.org/abs/2402.19170

@arXiv_csSE_bot@mastoxiv.page
2024-02-29 08:35:45

This arxiv.org/abs/2402.16977 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:51:15

HOP to the Next Tasks and Domains for Continual Learning in NLP
Umberto Michieli, Mete Ozay
arxiv.org/abs/2402.18449

@arXiv_csHC_bot@mastoxiv.page
2024-03-28 07:12:56

Eternagram: Probing Player Attitudes in Alternate Climate Scenarios Through a ChatGPT-Driven Text Adventure
Suifang Zhou, Latisha Besariani Hendra, Qinshi Zhang, Jussi Holopainen, RAY LC
arxiv.org/abs/2403.18160

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 08:33:41

This arxiv.org/abs/2404.18286 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csIR_bot@mastoxiv.page
2024-03-01 08:33:55

This arxiv.org/abs/2308.11131 has been replaced.
initial toot: mastoxiv.page/@arXiv_csIR_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 08:30:52

This arxiv.org/abs/2402.18678 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 06:49:01

RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing
Yucheng Hu, Yuxing Lu
arxiv.org/abs/2404.19543 arxiv.org/pdf/2404.19543
arXiv:2404.19543v1 Announce Type: new
Abstract: Large Language Models (LLMs) have catalyzed significant advancements in Natural Language Processing (NLP), yet they encounter challenges such as hallucination and the need for domain-specific knowledge. To mitigate these, recent methodologies have integrated information retrieved from external resources with LLMs, substantially enhancing their performance across NLP tasks. This survey paper addresses the absence of a comprehensive overview on Retrieval-Augmented Language Models (RALMs), both Retrieval-Augmented Generation (RAG) and Retrieval-Augmented Understanding (RAU), providing an in-depth examination of their paradigm, evolution, taxonomy, and applications. The paper discusses the essential components of RALMs, including Retrievers, Language Models, and Augmentations, and how their interactions lead to diverse model structures and applications. RALMs demonstrate utility in a spectrum of tasks, from translation and dialogue systems to knowledge-intensive applications. The survey includes several evaluation methods of RALMs, emphasizing the importance of robustness, accuracy, and relevance in their assessment. It also acknowledges the limitations of RALMs, particularly in retrieval quality and computational efficiency, offering directions for future research. In conclusion, this survey aims to offer a structured insight into RALMs, their potential, and the avenues for their future development in NLP. The paper is supplemented with a Github Repository containing the surveyed works and resources for further study: github.com/2471023025/RALM_Sur.

@arXiv_csSE_bot@mastoxiv.page
2024-02-13 12:55:05

Designing NLP-based solutions for requirements variability management: experiences from a design science study at Visma
Parisa Elahidoost, Michael Unterkalmsteiner, Davide Fucci, Peter Liljenberg, Jannik Fischbach
arxiv.org/abs/2402.07145

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 07:26:51

A Semantic Distance Metric Learning approach for Lexical Semantic Change Detection
Taichi Aida, Danushka Bollegala
arxiv.org/abs/2403.00226

@arXiv_csIR_bot@mastoxiv.page
2024-02-29 08:33:17

This arxiv.org/abs/2212.06540 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 07:27:30

Your Model Is Not Predicting Depression Well And That Is Why: A Case Study of PRIMATE Dataset
Kirill Milintsevich (University of Caen Normandy, University of Tartu), Kairit Sirts (University of Tartu), Gaël Dias (University of Caen Normandy)
arxiv.org/abs/2403.00438

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 07:27:03

Gender Bias in Large Language Models across Multiple Languages
Jinman Zhao, Yitian Ding, Chen Jia, Yining Wang, Zifan Qian
arxiv.org/abs/2403.00277

@arXiv_csSE_bot@mastoxiv.page
2024-02-26 08:33:40

This arxiv.org/abs/2308.08784 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-04 07:27:44

Surveying the Dead Minds: Historical-Psychological Text Analysis with Contextualized Construct Representation (CCR) for Classical Chinese
Yuqi Chen, Sixuan Li, Ying Li, Mohammad Atari
arxiv.org/abs/2403.00509

@arXiv_csSE_bot@mastoxiv.page
2024-02-26 08:33:45

This arxiv.org/abs/2310.03128 has been replaced.
initial toot: mastoxiv.page/@arXiv_csSE_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 07:16:47

Analyzing the Role of Semantic Representations in the Era of Large Language Models
Zhijing Jin, Yuen Chen, Fernando Gonzalez, Jiarui Liu, Jiayi Zhang, Julian Michael, Bernhard Schölkopf, Mona Diab
arxiv.org/abs/2405.01502

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 08:45:15

This arxiv.org/abs/2405.00289 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csIR_bot@mastoxiv.page
2024-02-27 08:21:45

This arxiv.org/abs/2312.12430 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 08:44:49

This arxiv.org/abs/2404.19048 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 07:15:47

Modeling Empathetic Alignment in Conversation
Jiamin Yang, David Jurgens
arxiv.org/abs/2405.00948 arxiv.org/pdf/2405.…

@arXiv_csCL_bot@mastoxiv.page
2024-03-22 06:54:55

A Taxonomy of Ambiguity Types for NLP
Margaret Y. Li, Alisa Liu, Zhaofeng Wu, Noah A. Smith
arxiv.org/abs/2403.14072

@arXiv_csIR_bot@mastoxiv.page
2024-03-29 06:50:08

Towards LLM-RecSys Alignment with Textual ID Learning
Juntao Tan, Shuyuan Xu, Wenyue Hua, Yingqiang Ge, Zelong Li, Yongfeng Zhang
arxiv.org/abs/2403.19021

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:33:07

ReproHum #0087-01: Human Evaluation Reproduction Report for Generating Fact Checking Explanations
Tyler Loakman, Chenghua Lin
arxiv.org/abs/2404.17481

@arXiv_csCL_bot@mastoxiv.page
2024-05-03 07:16:15

It Couldn't Help But Overhear: On the Limits of Modelling Meta-Communicative Grounding Acts with Supervised Learning
Brielen Madureira, David Schlangen
arxiv.org/abs/2405.01139

@arXiv_csCL_bot@mastoxiv.page
2024-05-01 08:32:41

This arxiv.org/abs/2208.08690 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-04-01 08:29:53

This arxiv.org/abs/2311.08590 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-01 08:32:38

This arxiv.org/abs/2402.14614 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-01 08:30:25

This arxiv.org/abs/2403.19432 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-01 08:30:13

This arxiv.org/abs/2403.07726 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-01 06:53:44

Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge
Ansh Arora, Xuanli He, Maximilian Mozes, Srinibas Swain, Mark Dras, Qiongkai Xu
arxiv.org/abs/2402.19334

@arXiv_csCL_bot@mastoxiv.page
2024-03-19 07:19:15

From explainable to interpretable deep learning for natural language processing in healthcare: how far from reality?
Guangming Huang, Yunfei Long, Yingya Li, Giorgos Papanastasiou
arxiv.org/abs/2403.11894

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 08:32:15

This arxiv.org/abs/2402.00838 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:51:04

Tokenization Is More Than Compression
Craig W. Schmidt, Varshini Reddy, Haoran Zhang, Alec Alameddine, Omri Uzan, Yuval Pinter, Chris Tanner
arxiv.org/abs/2402.18376

@arXiv_csCL_bot@mastoxiv.page
2024-02-15 08:30:29

This arxiv.org/abs/2402.04222 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-04-15 08:30:06

This arxiv.org/abs/2312.06499 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-02-23 06:56:00

Malaysian English News Decoded: A Linguistic Resource for Named Entity and Relation Extraction
Mohan Raj Chanthran, Lay-Ki Soon, Huey Fang Ong, Bhawani Selvaretnam
arxiv.org/abs/2402.14521

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:39

DANSK and DaCy 2.6.0: Domain Generalization of Danish Named Entity Recognition
Kenneth Enevoldsen, Emil Trenckner Jessen, Rebekah Baglini
arxiv.org/abs/2402.18209

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:56

Towards Better Understanding of Contrastive Sentence Representation Learning: A Unified Paradigm for Gradient
Mingxin Li, Richong Zhang, Zhijie Nie
arxiv.org/abs/2402.18281

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:59

Can a Multichoice Dataset be Repurposed for Extractive Question Answering?
Teresa Lynn, Malik H. Altakrori, Samar Mohamed Magdy, Rocktim Jyoti Das, Chenyang Lyu, Mohamed Nasr, Younes Samih, Alham Fikri Aji, Preslav Nakov, Shantanu Godbole, Salim Roukos, Radu Florian, Nizar Habash
arxiv.org/abs/2404.17342

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:28:42

This arxiv.org/abs/2208.08690 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:27

Learning Intrinsic Dimension via Information Bottleneck for Explainable Aspect-based Sentiment Analysis
Zhenxiao Cheng, Jie Zhou, Wen Wu, Qin Chen, Liang He
arxiv.org/abs/2402.18145

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:29:17

This arxiv.org/abs/2402.12819 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 06:50:37

Challenges in Pre-Training Graph Neural Networks for Context-Based Fake News Detection: An Evaluation of Current Strategies and Resource Limitations
Gregor Donabauer, Udo Kruschwitz
arxiv.org/abs/2402.18179

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:29:13

This arxiv.org/abs/2402.08015 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:47

TIGQA: An Expert Annotated Question Answering Dataset in Tigrinya
Hailay Teklehaymanot, Dren Fazlija, Niloy Ganguly, Gourab K. Patro, Wolfgang Nejdl
arxiv.org/abs/2404.17194

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:53

Reinforcement Retrieval Leveraging Fine-grained Feedback for Fact Checking News Claims with Black-Box LLM
Xuan Zhang, Wei Gao
arxiv.org/abs/2404.17283

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 07:32:51

Prompting Techniques for Reducing Social Bias in LLMs through System 1 and System 2 Cognitive Processes
Mahammed Kamruzzaman, Gene Louis Kim
arxiv.org/abs/2404.17218

@arXiv_csCL_bot@mastoxiv.page
2024-02-29 08:32:01

This arxiv.org/abs/2312.01661 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-04-29 08:28:47

This arxiv.org/abs/2307.05052 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-20 08:28:12

This arxiv.org/abs/2403.07311 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-04-26 08:31:23

This arxiv.org/abs/2404.12096 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-26 08:30:53

This arxiv.org/abs/2402.14208 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-03-28 08:28:27

This arxiv.org/abs/2403.16432 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-16 08:30:34

This arxiv.org/abs/2402.08638 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-15 08:30:43

This arxiv.org/abs/2402.08638 has been replaced.
initial toot: mastoxiv.page/@arXiv_csCL_…

@arXiv_csCL_bot@mastoxiv.page
2024-02-12 08:30:32

This arxiv.org/abs/2309.08968 has been replaced.
link: scholar.google.com/scholar?q=a

@arXiv_csCL_bot@mastoxiv.page
2024-03-13 06:48:05

Knowledge Graph Large Language Model (KG-LLM) for Link Prediction
Dong Shu, Tianle Chen, Mingyu Jin, Yiting Zhang, Mengnan Du, Yongfeng Zhang
arxiv.org/abs/2403.07311