Latest 15 Papers - April 22, 2025
Knowledge Editing
Knowledge editing modifies the factual knowledge encoded in large language models (LLMs) through small, targeted updates, improving accuracy and adaptability without full retraining. This section covers the latest papers on knowledge editing, including work on in-context editing, multimodal editing, and lifelong editing.
Can We Edit LLMs for Long-Tail Biomedical Knowledge?
The authors investigate whether LLMs can be edited to incorporate long-tail biomedical knowledge. They combine in-context editing with a knowledge-graph-based framework and report significant gains on long-tail biomedical facts.
Date: 2025-04-14
How to Make LLMs Forget: On Reversing In-Context Knowledge Edits
This paper studies how to reverse in-context knowledge edits in LLMs. The authors propose a reinforcement-learning-based framework that undoes such edits and restores the model's original knowledge, and show that it does so effectively.
Date: 2025-04-10 Comment: Accepted at NAACL Main 2025
CodeUpdateArena: Benchmarking Knowledge Editing on API Updates
The authors introduce a benchmark for knowledge editing on API updates, with a new dataset and evaluation metrics for assessing how well editing methods keep models current as library APIs change.
Date: 2025-04-03 Comment: Under Review
μKE: Matryoshka Unstructured Knowledge Editing of Large Language Models
This paper tackles unstructured knowledge editing of LLMs. The authors represent the model's knowledge with a matryoshka (nested) structure, define editing operations over it, and report effective edits with improved downstream performance.
Date: 2025-04-01 Comment: 16 pages, 6 figures
In-Context Editing: Learning Knowledge from Self-Induced Distributions
The authors propose an in-context editing approach for LLMs: target knowledge is represented as a self-induced distribution (the distribution the model assigns when the new fact is present in context), and the model is edited toward it. Experiments show effective edits with improved performance.
Date: 2025-03-30
Identifying Multi-modal Knowledge Neurons in Pretrained Transformers via Two-stage Filtering
This paper addresses the identification of multi-modal knowledge neurons in pretrained transformers. The authors propose a two-stage filtering framework and show that it locates these neurons effectively.
Date: 2025-03-29
AnyEdit: Edit Any Knowledge Encoded in Language Models
The authors propose AnyEdit, an approach for editing arbitrary knowledge encoded in LLMs, together with editing operations and an evaluation framework, and report effective edits with improved performance.
Date: 2025-03-27
ADS-Edit: A Multimodal Knowledge Editing Dataset for Autonomous Driving Systems
This paper introduces ADS-Edit, a multimodal knowledge editing dataset for autonomous driving systems, with accompanying tasks and an evaluation framework for measuring editing performance in this domain.
Date: 2025-03-26 Comment: Work in progress
CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners
The authors propose a circuit-aware approach to editing LLMs, with editing operations and an evaluation framework, and show that circuit awareness yields edited knowledge that generalizes better.
Date: 2025-03-20 Comment: Work in progress
Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance
This paper investigates how model editing affects cross-linguistic performance. The authors apply editing operations and evaluate the edited models on cross-linguistic tasks, surfacing where edits transfer across languages and where they break down.
Date: 2025-03-18 Comment: Accepted at NAACL 2025 (Industry track)
Precise Localization of Memories: A Fine-grained Neuron-level Knowledge Editing Technique for LLMs
The authors propose a fine-grained, neuron-level technique that precisely localizes memories in LLMs and edits them, reporting effective edits with improved performance.
Date: 2025-03-17 Comment: ICLR 2025
Composable Interventions for Language Models
This paper proposes composable interventions for LLMs: a framework for applying multiple interventions (such as knowledge editing, unlearning, and compression) to the same model and evaluating how they interact.
Date: 2025-03-16 Comment: Published at ICLR 2025
Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing
The authors address UnderEdit (edits that fail to take effect) and OverEdit (edits that disturb unrelated knowledge) with an iterative, neighbor-assisted model editing procedure, and show it resolves both failure modes.
Date: 2025-03 Comment: Under Review @ ACL'25
Lifelong Knowledge Editing for LLMs with Retrieval-Augmented Continuous Prompt Learning
This paper addresses lifelong knowledge editing for LLMs via retrieval-augmented continuous prompt learning: edits are stored and retrieved as continuous prompts rather than written into model weights, and the approach is evaluated on sequential (lifelong) editing tasks.
Date: 2025-03-14 Comment: EMNLP 2024 main
Lifelong Knowledge Editing for Vision Language Models with Low-Rank Mixture-of-Experts
The authors propose lifelong knowledge editing for vision-language models using a low-rank mixture-of-experts, with editing operations and an evaluation framework, and report effective edits over long editing sequences.
Date: 2025-03-14 Comment: CVPR 2025 Accepted
Model Editing
Model editing modifies a model's parameters, architecture, or training data to improve its performance, accuracy, and adaptability. This section covers the latest papers on model editing, including work on task vectors, explicit memory, and machine unlearning.
When is Task Vector Provably Effective for Model Editing? A Generalization Analysis of Nonlinear Transformers
This paper gives a theoretical account of when task vectors are provably effective for model editing: a generalization analysis of nonlinear transformers that characterizes the conditions under which task arithmetic succeeds.
Date: 2025-04-18 Comment: Published at ICLR 2025 as an oral paper
MemLLM: Finetuning LLMs to Use An Explicit Read-Write Memory
The authors finetune LLMs to use an explicit read-write memory: instead of relying only on parametric knowledge, the model learns to write facts into, and later read them from, a structured external memory. A toy illustration of the read-write idea appears below.
Date: 2025-04-17 Comment: Published in Transactions on Machine Learning Research (TMLR)
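To make the read-write memory idea concrete, here is a tiny triple store that a model could write facts into and query later. This interface is a hypothetical stand-in for illustration, not MemLLM's actual API:

```python
class TripleMemory:
    """Minimal explicit read-write memory over (subject, relation, object) triples."""

    def __init__(self):
        self.store = set()

    def write(self, subj: str, rel: str, obj: str) -> None:
        self.store.add((subj, rel, obj))

    def read(self, subj: str, rel: str) -> list[str]:
        return [o for s, r, o in self.store if s == subj and r == rel]

mem = TripleMemory()
mem.write("Acme Corp", "ceo", "Jane Doe")   # hypothetical fact the model emits as a write
print(mem.read("Acme Corp", "ceo"))         # later query returns ['Jane Doe']
```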
An Adversarial Perspective on Machine Unlearning for AI Safety
This paper takes an adversarial perspective on machine unlearning for AI safety, evaluating whether unlearning methods truly remove hazardous capabilities or merely suppress them in ways an adversary can recover.
Date: 2025-04-10 Comment: Final version published in Transactions on Machine Learning Research (TMLR); Best technical paper at NeurIPS 2024 SoLaR workshop
Pretraining Language Models for Diachronic Linguistic Change Discovery
In this paper, the authors propose pretraining language models on corpora from different historical periods to discover diachronic linguistic change.
Q&A: Latest 15 Papers - April 22, 2025
Knowledge Editing
Q: What is knowledge editing in the context of large language models (LLMs)? A: Knowledge editing is the process of modifying the factual knowledge encoded in an LLM through small, targeted updates rather than full retraining, to improve its accuracy and keep it up to date. A toy sketch of one common mechanism follows.
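Many locate-then-edit methods (ROME-style editing, for example) apply a rank-one update to an MLP weight matrix so that a key vector representing the subject maps to a new value vector encoding the edited fact. The sketch below is a minimal toy version of that idea; the matrix size and vectors are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "MLP" weight matrix: maps a 16-dim key to a 16-dim value.
W = rng.normal(size=(16, 16))

k = rng.normal(size=16)      # key vector for the fact we want to edit
v_new = rng.normal(size=16)  # value vector encoding the new fact

# Rank-one update: after the edit, W_edited @ k == v_new exactly,
# while directions orthogonal to k are left unchanged.
residual = v_new - W @ k
W_edited = W + np.outer(residual, k) / (k @ k)

assert np.allclose(W_edited @ k, v_new)
```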
Q: What are some of the challenges associated with knowledge editing in LLMs? A: Challenges include preserving unrelated knowledge (locality), making the edit generalize to paraphrases of the edited fact, avoiding over- or under-editing, and reliably evaluating all of the above.
Q: What is in-context editing, and how does it differ from traditional knowledge editing? A: In-context editing supplies the new or corrected fact in the prompt at inference time instead of modifying the model's weights. Unlike parameter-editing methods, the change is temporary and scoped to the current query; a minimal sketch follows.
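In its simplest form, in-context editing is just careful prompt construction. The template below is illustrative, not taken from any particular paper, and the fact is a made-up example:

```python
def edit_in_context(new_fact: str, question: str) -> str:
    """Build a prompt that overrides the model's stored knowledge
    with `new_fact` for the duration of this query only."""
    return (
        f"Fact update: {new_fact}\n"
        f"Answer the question using the updated fact.\n"
        f"Question: {question}\nAnswer:"
    )

prompt = edit_in_context(
    "The CEO of Acme Corp is Jane Doe.",   # hypothetical fact
    "Who is the CEO of Acme Corp?",
)
# `prompt` would then be sent to any LLM; the base weights are untouched.
```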
Q: What is the role of GUI agents, and how do they relate to knowledge editing? A: GUI agents are LLM-driven systems that operate graphical user interfaces (clicking, typing, navigating) to carry out tasks; they are not themselves an editing technique. They appear alongside editing work because an agent's knowledge of applications and interfaces goes stale, and targeted knowledge or model edits are one way to keep it current.
Model Editing
Q: What is model editing, and how does it differ from knowledge editing? A: Model editing modifies a model's parameters, architecture, or training data to change its behavior. Knowledge editing can be seen as the special case that targets specific facts the model stores; model editing more broadly covers interventions such as task arithmetic, unlearning, and compression.
Q: What are some of the challenges associated with model editing in LLMs? A: Challenges include preserving performance on unrelated inputs, the risk of over- or under-editing, interference between sequential edits, and the difficulty of evaluating edits consistently.
Q: What is the role of task vectors in model editing, and how do they differ from traditional model editing? A: A task vector is the element-wise difference between a fine-tuned model's weights and the pretrained weights. Adding a scaled task vector moves the model toward the corresponding task, and subtracting it moves the model away, so behavior can be edited by simple checkpoint arithmetic instead of further gradient training; see the sketch below.
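A minimal sketch of task-vector arithmetic on toy torch modules; the two randomly initialized layers stand in for real pretrained and fine-tuned checkpoints, and the scale of 0.5 is illustrative:

```python
import torch
import torch.nn as nn

def task_vector(pretrained: nn.Module, finetuned: nn.Module) -> dict:
    """Element-wise parameter difference: tau = theta_ft - theta_pre."""
    pre = pretrained.state_dict()
    return {k: v - pre[k] for k, v in finetuned.state_dict().items()}

def apply_task_vector(model: nn.Module, tau: dict, scale: float = 1.0) -> None:
    """Edit the model in place: theta <- theta + scale * tau.
    A negative `scale` steers the model *away* from the task."""
    sd = model.state_dict()
    model.load_state_dict({k: sd[k] + scale * tau[k] for k in sd})

# Toy demonstration with two randomly initialized "checkpoints".
base, tuned = nn.Linear(8, 8), nn.Linear(8, 8)
tau = task_vector(base, tuned)
apply_task_vector(base, tau, scale=0.5)  # move halfway toward `tuned`
```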
GUI Agents
Q: What is a GUI agent, and how does it differ from traditional knowledge editing? A: A GUI agent is an LLM-driven system that completes tasks by operating a graphical user interface: it observes the screen (via screenshots or an accessibility tree), chooses an action such as a click or keystroke, and executes it in a loop until the goal is met. It differs from knowledge editing in that it acts on an external interface rather than modifying what a model knows; a skeleton of the loop is sketched below.
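A generic observe-think-act skeleton for a GUI agent. Everything here is a hypothetical placeholder, not a real library's API: `capture_screen`, `llm_choose_action`, and `execute` are stubbed out so the sketch runs as-is:

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # e.g. "click", "type", "done"
    target: str    # UI element or text payload

# Stub implementations; a real agent would take screenshots and call an LLM.
def capture_screen() -> str:
    return "<ui>button: Submit</ui>"

def llm_choose_action(goal: str, screen: str) -> Action:
    return Action("done", "")

def execute(action: Action) -> None:
    print(f"{action.kind} -> {action.target}")

def run_gui_agent(goal: str, max_steps: int = 10) -> None:
    """Observe-think-act loop: perceive the screen, let the LLM
    pick an action, execute it, repeat until done."""
    for _ in range(max_steps):
        screen = capture_screen()
        action = llm_choose_action(goal, screen)
        if action.kind == "done":
            return
        execute(action)

run_gui_agent("submit the form")
```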
Q: What are some of the challenges associated with GUI agents built on LLMs? A: Challenges include grounding instructions to the correct on-screen elements, robustness to interfaces that change between sessions, planning over long action sequences, and recovering from misclicks or unexpected dialogs.
Q: What is the role of activation steering in GUI agents, and how does it differ from prompt-based control? A: Activation steering adds a vector to the model's hidden activations at inference time to shift its behavior (for example, toward caution before destructive UI actions). It is not a kind of GUI agent but an intervention that can be applied to the LLM inside one; unlike prompt-based control, it acts on internal representations and leaves the weights unchanged. A minimal sketch follows.
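Activation steering is usually implemented with a forward hook that adds a fixed vector to a layer's output at inference time. A minimal torch sketch on a toy model; the layer choice, vector, and scale are all illustrative:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

steering_vector = torch.randn(16)  # illustrative; the next section shows
                                   # how such a vector can be derived
scale = 2.0

def add_steering(module, inputs, output):
    # Shift the layer's output along the steering direction.
    return output + scale * steering_vector

# Hook the first layer; every forward pass is now steered.
handle = model[0].register_forward_hook(add_steering)
steered = model(torch.randn(1, 16))
handle.remove()  # detach the hook to restore normal behavior
```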
Steering Vector
Q: What is a steering vector, and how does it differ from traditional knowledge editing? A: A steering vector is a direction in a model's activation space that is added to hidden states at inference time to push generations toward a target behavior. Unlike knowledge editing, it changes no weights and encodes a broad behavioral direction rather than a specific fact; one common derivation is sketched below.
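One common recipe derives a steering vector as the difference between mean activations over two contrastive prompt sets. In this sketch, random tensors stand in for hidden states collected from a real model:

```python
import torch

torch.manual_seed(0)

# Stand-ins for hidden states collected at one layer while running the
# model on two contrastive prompt sets (desired vs. undesired behavior).
acts_positive = torch.randn(32, 16) + 1.0   # 32 prompts, 16-dim activations
acts_negative = torch.randn(32, 16) - 1.0

# Difference-of-means steering vector, normalized to unit length.
steering_vector = acts_positive.mean(dim=0) - acts_negative.mean(dim=0)
steering_vector = steering_vector / steering_vector.norm()

# At inference, add `scale * steering_vector` to that layer's hidden
# states (e.g., via the forward hook shown in the previous section).
```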
Q: What are some of the challenges associated with steering vectors in LLMs? A: Challenges include choosing the layer and scale at which to intervene, avoiding degraded fluency or collateral changes to unrelated behavior, and verifying that the steering generalizes beyond the prompts used to derive the vector.
Q: What is the role of feature-guided activation additions in steering vectors, and how do they differ from traditional steering vectors? A: Feature-guided activation additions build the steering direction from interpretable features, for example a direction identified by a sparse autoencoder, rather than from raw activation differences. This makes the intervention more targeted and easier to interpret than a generic steering vector; a toy sketch follows.
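A toy sketch of picking a steering direction from a sparse autoencoder's decoder, where each decoder column corresponds to one learned feature. The SAE weights here are random placeholders rather than a trained autoencoder, and the chosen feature index is hypothetical:

```python
import torch

torch.manual_seed(0)

d_model, n_features = 16, 64
# Decoder of a (toy, untrained) sparse autoencoder: one column per feature.
sae_decoder = torch.randn(d_model, n_features)

feature_idx = 7  # hypothetical feature chosen by inspection
steering_vector = sae_decoder[:, feature_idx]
steering_vector = steering_vector / steering_vector.norm()
# Adding this vector to hidden states nudges the model toward the
# behavior associated with that feature.
```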
Efficient LLM
Q: What is an efficient LLM, and how does it differ from traditional LLMs? A: An efficient LLM is one designed or adapted to cut compute, memory, and latency costs, typically via quantization, pruning, distillation, or parameter-efficient fine-tuning, while keeping as much of the original model's quality as possible. A minimal quantization sketch follows.
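A common efficiency lever is post-training quantization: storing weights in fewer bits and dequantizing on the fly. The sketch below shows symmetric int8 quantization of a single toy weight matrix and the reconstruction error it introduces; real systems use per-channel scales and fused low-bit kernels:

```python
import torch

torch.manual_seed(0)
W = torch.randn(256, 256)  # toy fp32 weight matrix

# Symmetric int8 quantization: one scale for the whole tensor.
scale = W.abs().max() / 127.0
W_int8 = torch.clamp((W / scale).round(), -127, 127).to(torch.int8)

# Dequantize on the fly at inference time.
W_deq = W_int8.to(torch.float32) * scale

print("memory: 4 bytes/weight -> 1 byte/weight")
print("max abs error:", (W - W_deq).abs().max().item())
```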
Q: What are some of the challenges associated with efficient LLMs? A: Challenges include preserving accuracy as precision or parameter count shrinks, uneven hardware support for low-bit formats, and fairly evaluating efficiency-quality trade-offs across tasks.
Q: What is the role of task-localized sparse fine-tuning in efficient LLMs, and how does it differ from traditional fine-tuning? A: Task-localized sparse fine-tuning updates only a small, task-relevant subset of a model's parameters and freezes the rest, cutting training compute and memory while limiting interference with unrelated capabilities. It differs from full fine-tuning, which updates every parameter; a generic sketch follows.
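The general idea can be sketched as masked updates: score parameters for task relevance, then fine-tune only the top-scoring ones. Scoring by gradient magnitude here is an assumption for illustration, not the method of any specific paper:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(16, 4)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss_fn = nn.CrossEntropyLoss()

# 1) Score parameters by gradient magnitude on the task data.
loss_fn(model(x), y).backward()
masks = {}
for name, p in model.named_parameters():
    k = max(1, int(0.1 * p.numel()))          # keep 10% of each tensor
    threshold = p.grad.abs().flatten().topk(k).values.min()
    masks[name] = (p.grad.abs() >= threshold).float()
model.zero_grad()

# 2) Fine-tune, zeroing gradients outside the task-relevant mask.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    for name, p in model.named_parameters():
        p.grad *= masks[name]
    opt.step()
```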